Hacker News | ryanmk's comments

I've plugged this before, but this is something that has really, actually, worked for me: beeminder.com


Dude sounds like he's stuck in his own head.


He seems lucid enough.

His points about rape are on the mark. They seem to tie into a general pattern of squeamishness when it comes to sex.

His point about superheroes being a distraction from reality is an interesting one. I'm not sure it's as bad as he claims, but it makes sense.


Dude has a large, popular and critically acclaimed body of work behind him. I'm interested to hear from him.


His head is quite well regarded by many people.


It would be nice to see the numbers, in addition to the graph.

I did a straight port of the C code to Lua and ran it with LuaJIT.

The C code ran in 49.901s.

The LuaJIT code ran in 14m16.547s.

The C code was compiled with -O2 -std=c11 -lm (GCC 4.8.1, MinGW).

luajit.exe was version 2.0.3, a static build made with VS2012 x64.


It would also be nice to express the numbers in the same units too. :-)

"The luajit code ran in 856.547s."

The graph is log-log, which can be a bit tricky to read, but that puts it in the same ballpark as Ruby and PHP. (Assuming the same factor of difference between the two, after scaling your C vs. LuaJIT results to match the given C value of ~40s, LuaJIT comes out to 687s.)


Did you turn off the JIT? My LuaJIT ran 2.16x slower, versus 8x for Lua 5.1, 9.35x for 5.2, and 10.33x for 5.3. (I didn't localize math.sqrt, but I know LuaJIT wouldn't care.)

* Oops, I had cheated and used a for loop instead of a while loop. The LuaJIT time was the same, but the others are 10.5x, 12.58x, and 15.63x.


On OS X 10.1 with LuaJIT 2.1.0 alpha, a straight translation of the C code runs in 59 seconds on my machine, which is close to the C speed there (38 seconds compiled with -O3).

Edit: fixed luajit version


I've placed my port here: https://gist.github.com/anonymous/ce176d7ab4f6b7b1ba91

If you see anything wrong with it, or odd, feel free to share.

I'm still investigating what is happening to make the run so slow, so if you can find something wrong in my code, that would help.


Lua variables are global by default. Declare all the variables as local and you'll see a significant improvement. Also, there is a boolean type, so you can use true and false directly instead of comparing numbers.
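A minimal sketch of both suggestions (hypothetical names; the actual port is in the gist linked above):

```lua
x = 1         -- global: every access is a table lookup in _G
local y = 1   -- local: a register slot, much cheaper in hot loops

-- With a real boolean return there is no need to compare numbers:
local function isEven(n)
  return n % 2 == 0  -- instead of returning 1/0 and testing == 1
end

print(isEven(4))  -- true
```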


I tried using locals, and there was no change to the time. Using a boolean return value for isPrime shaved off two seconds.


If you run it with luajit -jv primes.lua, you'll see NYI (not-yet-implemented) messages about math.mod.

Replacing math.mod(n, i) with (n % i) gives roughly a 9.4x speedup.

EDIT: the LuaJIT version was 2.0.4 on Mac OS X
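For anyone following along, the change is just this (a sketch: math.mod is a compatibility alias for math.fmod, and LuaJIT's JIT compiler doesn't compile it, so every call inside the hot loop falls back to the interpreter):

```lua
-- Before: aborts the trace with an NYI, so the loop runs interpreted.
-- if math.mod(n, i) == 0 then ... end

-- After: the % operator is compiled natively by the JIT.
local function divides(n, i)
  return n % i == 0
end

print(divides(15, 3))  -- true
print(divides(15, 4))  -- false
```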


Thanks, using % did the trick.


I use Lua for most scripting. A big reason for this is that the entire Lua environment is in lua.exe. This allows me to distribute scripts easily without having to include a C++ installer with them.


Lua is an underrated gem, in my opinion, when it comes to scripting.


Love Ted Chiang!


Isn't that called a "paycheck"?


Well, you can increase your paycheque by getting promoted... If the 'heroes' are the ones that get promoted, then they are getting more than the unsung heroes.


I suppose that's not a terrible metric in theory, but by that token the people who make mistakes are, in the general case, still taking more credit, since they are also paid to fix the problems.


That's but one example of the dysfunction present in that case. Many other examples of individualistic politics exist.


A "paycheck" is too individual to successfully produce quality in an organization larger than a couple dozen people.

The correct method is to understand the system of work, and implement good process to continually improve it. Quality needs to be built in from the beginning.


I agree with the parent. Upon encountering the page, my first feeling is that this is one of THOSE landing pages.

Of course, I know this is just me bringing my biases to the thing, but that's the way it is.


I'd like to see what his take would be on a Groundhog Day scenario.


The website doesn't say what LTS Haskell is. A curated set of packages? Of what?


Haskell's package repository, Hackage, changes very rapidly. The best policy that exists today for determining whether packages will work together, the Package Versioning Policy (PVP), is a bit like Semantic Versioning. This gives you some chance in hell of finding compatible packages, but it's very low information and often fails. This is exacerbated by a few things:

1. Due to static typing, package combination errors are detected at compile time, not runtime. This means that potential mismatches in package versions are detected and punished very early and very severely. This is why Cabal compares so unfavorably to NPM. It's not that NPM is more advanced; it's that Cabal tries harder to ensure that package mismatches don't survive the build.

2. There are differences of opinion on the right place to put the burden of package matching: should maintainers exhaustively test things and maintain accurate and complete information at all times, or should people just tell them quickly when things break? Unfortunately, these two methods of handling versioning don't live together nicely, and they wreak havoc on the solver.

3. Haskell as a language strongly promotes chunking off nice, trustworthy abstractions and squirreling them away into their own package. This means that there are a lot of packages all versioning against one another and the package matching problem is quite large.

So, atop this, a few "curated sets" have been built which push the package-managing troubles to someone else (namely a build farm), where, arguably, they ought to be. These sacrifice bleeding-edge availability for a bit less personal burden in finding compatible package sets. The two I use are called Stackage and Nixpkgs.

Stackage in particular is supported by the company FPComplete, and FPComplete sells Haskell solutions to industry. They are therefore strongly interested in providing a support solution for stable sets of packages from Stackage. This may even just be a formalization of what they already do, since they probably cannot justify code breakage to customers as "well, the state of the art of libraries keeps changing".

As far as I can tell, these LTS package sets are stable points in the master package set which will receive only bugfixes and unquestionably non-breaking feature updates. In terms of the PVP, they can receive minor version bumps but are protected against major ones. Stackage is a rather massive set of packages, so this is done in a pretty conservative manner, and it seems reasonably experimental at the moment.
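Concretely, the PVP treats the first two components of a version A.B.C as the major version and the third as minor, so "minor bumps only" corresponds to a dependency range of this shape (a hypothetical cabal fragment, not from the post):

```
build-depends:
  -- 1.2 is the PVP major version; 1.2.0 -> 1.2.1 is a minor bump
  -- (allowed in LTS), while 1.2.x -> 1.3 is a major bump (not allowed).
  text >= 1.2.0 && < 1.3
```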

https://www.fpcomplete.com/blog/2014/12/backporting-bug-fixe...

In particular, it'll come down to the willingness of library maintainers to backport bugfixes to prior major versions of their libraries. Good library maintainers have support windows they care about already, but not everyone has the resources to pull that off. LTS will feel this differential effect.

Anyway, hopefully that's a good, reasonably correct overview. Corrections welcome.


I have a theory, which I'd like to actually put data behind someday, that there's a 3.5 involved: lots of diamond dependencies.

I think this happens because so much that's used by everything isn't a part of the standard library (text, containers, mtl).

There are upsides and downsides to that.


Yeah, agreed. High reusability turns out to cost a lot in proper dependency management...


My best understanding from the posting is that it's going to be a "stable" version of the "Haskell Platform" set of packages.

Haskell has been notorious for moving (too?) fast while not maintaining decent backwards/forwards compatibility, a problem made worse by some tooling issues ("cabal hell"). Now most of the technical issues should be solved.


As far as I'm aware this has nothing to do with the Haskell Platform and is, in fact, more or less a competitor.


It does seem to be a competitor. They could use the same infra to do the HP versioning and builds. Some NIH going on? The key here seems to be the level of automation, which is great to see.


Unfortunately, I think it's a bit of a branding thing, too. To most people's eyes the HP is something of a failure, since it's so slow to release (never mind the stabilizing effect it has had on all its constituent projects). Stackage thus grabbed some market- and mind-share by promising a closer-to-bleeding-edge update path at the cost of less stability. From this position they're peeling back a little, trading off newness for a bit more stability, in what appears to be the easiest-to-maintain fashion.

So from that, it's sort of culturally incompatible with the HP, unfortunately. It'd be very nice if some crosscutting could happen if and when LTS stabilizes as a product.


Actually, the ideas for LTS Haskell came out of conversations I had with Duncan and Mark at ICFP. The original idea was to create a GPS Haskell that would encompass a "best of both worlds." LTS Haskell is a first step towards that, and I'm hoping that Haskell Platform and Hackage ultimately fold this stuff back in.

I went into more detail on this history in the previous blog post: https://www.fpcomplete.com/blog/2014/12/backporting-bug-fixe...


Sounds good. I had assumed you were heading in this direction. The approach was something Duncan and I wanted to try in 2007/2008, but we didn't have the resources at the time. Now that the infrastructure is there, automatically identifying stable sets, tagging them, and releasing them is a good step. If you can get to the point of computing the next HP set in the same fashion, that will be a big win for stability.


The core set of packages (e.g. those in the platform) tend to not break backwards compatibility, but you're right that there are still packages that move fast and do.


I use Beeminder.com to stay motivated. They have GitHub integration, which helps.


Thanks for the Beeminder plug! Our GitHub integration is at http://gitminder.com

