Hacker News | GuB-42's comments

Unacceptable! This incident will be reported.

Same thing for me, I love the technical aspect in the same way I love the demoscene. The visuals are original and very well done, so much so that it is now almost impossible to talk about dithering without mentioning Obra Dinn at some point.

I am also a fan of puzzle/detective games, and this is an excellent one.

Truly a masterpiece in both visuals and gameplay, but together... not so much. For a game where understanding every detail of the scenes is critical, it felt like I was fighting the game engine. Many times I wished I could turn off the dithering effect and see the underlying models with more standard shading. At no point did it feel unfair, they really did a good job making it functional, but it was a distraction.

Not enough of a distraction to stop me from completing and enjoying the game and art. But had the art style not been unique, I would have enjoyed it much less.


> Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project…

> After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms.

There is a substantial amount of work here, comparable to how long a human writer would take to write from scratch, definitely not slop. I think we can call it AI-assisted, not AI-generated. Even the illustrations are well above average.


IIRC the Limbo team grew over time, and it shows in the gameplay: the second half feels less personal and more like a generic platformer. They hired more people to get it done. That doesn't mean it is rushed, but to me, the second half lacks personality compared to the first half.

Also, I don't like the idea of using gameplay time as a value statement. Maybe I say that because I don't have money problems, but I find that the tendency certain gamers have of judging games in terms of dollars per hour of gameplay is pretty damaging, as it incentivizes developers to focus on gameplay time more than polish. 3-10 hours is already plenty for that style of game. Note that there are many AAA games in the 10 hour range.


You can be comfortable about the concept, but not comfortable about the interview.

The way I understand it, OP asked this as a way to open the conversation, while candidates interpreted it as a math problem to solve, unintentionally slipping into "exam" mode.


It reminds me of "bazaar" shops I lived next to at some point.

They had all sorts of cheap objects shaped like simple household items. Obviously, you don't expect premium quality when you buy these things, but you do expect them to at least have some function, yet they managed to fail at the simplest things. Examples:

- Sewing needles with the eye too small to fit a thread through; they also bend as easily as a piece of wire

- Tubes of "super glue" that are mostly empty; also, when you press on the tube, it all goes on your finger instead of out of the nozzle

- Screwdriver bits with tolerances so loose they don't even fit the screw, some even had bubbles inside, like swiss cheese

- Packing tape that doesn't stick to cardboard, at all

- Steak knives that break at the handle as soon as you start cutting steak with them

- "squares" that are off by more than 1 degree


There are much better formats than JPEG. However, because still images are not really a problem in terms of bandwidth and storage, we just use bigger JPEGs when we need more quality. The extra complexity and broken compatibility are not worth it.

This is different for video: video uses a whole lot more bandwidth and storage, so we are more willing to accept newer standards.

That's where WebP comes from: the idea is that images are like single-frame videos, so we could use a video codec (VP8, the codec behind WebM) for still images, and it would be more efficient than JPEG.

That's also the reason why JPEG-XL is taking so long to be adopted. Because efficient video codecs are important, browsers want to support WebM, and they get WebP almost for free. JPEG-XL is an entirely new format just for still images; it is complex, and unlike with video, there is no strong need for a better still-image format.


IMO most of JPEG-XL's value is in the feature set. Having a format that can do transparency, proper HDR, and is efficient for everything from icons to pixel art to photographs is a really strong value prop. JPEG, WebP, and AVIF all have some of these, but none have all. (AVIF is the closest, but as a video codec it still has a pretty significant overhead for small images like icons.)

The opposite conclusion can be taken from the premise of rule #1: "You can't tell where a program is going to spend its time."

If you can't tell in advance what is performance critical, then consider everything to be performance critical.

I would then go against rule #3, "Fancy algorithms are slow when n is small, and n is usually small." n is usually small, except when it isn't, and per rule #1, you may not know that ahead of time. Assuming n is going to be small is how you get accidentally quadratic behavior, such as the infamous GTA Online loading bug. So assume n is going to be big unless you are sure it won't be, and understand that your users may use your software in ways you don't expect.
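To make the accidentally-quadratic point concrete, here is a minimal Python sketch (the function names are hypothetical, not from any particular codebase). Both functions deduplicate a sequence while preserving order, but the first does a linear scan per element and quietly becomes O(n²) when n grows:

```python
def dedup_scan(items):
    """O(n^2): each `in` test scans the whole list. Fine for tiny n."""
    seen = []
    for x in items:
        if x not in seen:   # linear scan over everything kept so far
            seen.append(x)
    return seen

def dedup_hashed(items):
    """O(n): set membership is a hash lookup instead of a scan."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

On a dozen elements the two are indistinguishable; on a million, the first is thousands of times slower, which is exactly the failure mode that only shows up when a user feeds the program a bigger n than the author ever tested.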

Note that if you really want high performance, you should properly characterize your "n" so that you can use the appropriate technique, it is hard because you need to know all your use cases and their implications in advance. Assuming n will be big is the easy way!

About rule #4: fancy algorithms are often not harder to implement; most of the time, it means using the right library.

About rule #2 (measure): yes, you absolutely should, but that doesn't mean you shouldn't consider performance before you measure. That would be like saying you shouldn't worry about introducing bugs before testing. You should do your best to make your code fast and correct before you start measuring and testing.

What I agree with is that you shouldn't introduce speed hacks unless you know what you are doing. Most performance comes from giving it consideration at every step: avoiding a copy here, using a hash map instead of a linear search there, etc. If you have to resort to a hack, it may be because you didn't consider performance early enough. For example, if you took care of making a function fast enough, you may not have to cache its results later on.
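As a sketch of the "cache results later" hack mentioned above: in Python, bolting a cache onto a function after the fact is one decorator with the standard library's `functools.lru_cache` (the function `expensive` here is a hypothetical stand-in for work that was never made fast in the first place):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    # Stand-in for a slow computation; the cache hides the cost
    # on repeated calls instead of fixing the function itself.
    return sum(i * i for i in range(n))
```

It works, but it is a patch: it adds memory overhead and only helps when the same arguments recur, which is why considering performance while writing `expensive` in the first place is usually the better move.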

As for #5, I agree completely: data is the most important thing. It applies to performance too, especially on modern hardware. To give a very simplified idea, a RAM access is about 100x slower than a CPU instruction, which means you can get massive speed improvements by making your memory footprint smaller and using cache-friendly data structures.
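A rough Python illustration of the footprint point, under the assumption that `sys.getsizeof` is a fair proxy for memory use: a contiguous `array` of doubles versus a list of boxed floats holding the same values.

```python
import sys
from array import array

n = 100_000
as_list = [float(i) for i in range(n)]   # list of pointers to boxed floats
as_array = array('d', range(n))          # contiguous buffer of 8-byte doubles

# The list's footprint is the pointer array plus every boxed float object.
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
array_bytes = sys.getsizeof(as_array)
```

On CPython the array comes out roughly 4x smaller, and because the doubles sit next to each other in memory, iterating over them is also far friendlier to the CPU cache than chasing a pointer per element.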


> If you can't tell in advance what is performance critical, then consider everything to be performance critical.

As for rule 2: first you measure.


Millennials love their weed, and party drugs too; in some ways it took over from Gen X drinking.

But I find Zoomers to be rather tame in terms of drinking, smoking, drugs, unsafe sex, etc... Few of the traditional vices, really.


The Dacia Spring proves that it doesn't have to be the case. The base version doesn't even have a touchscreen, let alone internet connectivity. It is a cheap car, in every sense of the word, but it shows that not every EV has to be like a Tesla.

The issue is the small actual range on the Dacia Spring. Great for grocery shopping and going to work in a city setting, bad for long journeys in the winter time. Basically what people want is exactly that type of barebones EV, but with more battery.

That’s genuinely nice that it doesn’t have the multimedia crap. They do also have an “extreme” model with touchscreen and connected services. At ~220km range it probably has about 100km in winter though. :-/

Good for them, and thank you for the tip!
