Hacker News | marhee's comments

Doesn’t this conflate dry-running with integration testing? AFAIK the purpose of a dry run is to understand what will happen, not to test what will happen. For the latter we have testing.


> AFAIK the purpose of a dry-run is to understand what will happen

Right - so the dry-run has to actually do as much of 'what will happen' as possible, except the actual things.

You want to put the check as far down as possible, close to the 'action'. You don't want any additional business logic gated by the dry-run check.
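A minimal sketch of that idea (all names here are hypothetical, not from any particular library): the `dry_run` flag is threaded all the way down to the one function that performs the side effect, so every layer above it — reads, permission checks, business logic — runs identically in both modes.

```python
# Toy in-memory store; in real life this would be a database or API.
DB = {1: {"id": 1, "name": "alice"}}

def fetch(record_id):
    return DB[record_id]

def check_permissions(record, field):
    # Business logic that should run in BOTH modes.
    if field == "id":
        raise PermissionError("id is read-only")

def write(record, field, value, dry_run=False):
    # The dry-run check sits immediately next to the action itself;
    # nothing above this point is gated by it.
    if dry_run:
        print(f"[dry-run] would set {field}={value!r} on record {record['id']}")
        return
    record[field] = value

def update(record_id, field, value, dry_run=False):
    record = fetch(record_id)          # reads still happen
    check_permissions(record, field)   # checks still happen
    write(record, field, value, dry_run=dry_run)

update(1, "name", "bob", dry_run=True)   # reports, changes nothing
assert DB[1]["name"] == "alice"
update(1, "name", "bob")                 # the real write
assert DB[1]["name"] == "bob"
```

Because only `write` looks at the flag, a dry run exercises exactly the code path the real run will take, minus the final mutation.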


> AFAIK the purpose of a dry-run is to understand what will happen, not to test what will happen. For the latter we have testing.

Not really. Testing is a way to increase confidence that code does what it is specified to do, because it is cheaper than full-blown formal analysis :)

The problem raised by OP here is granularity. An operation like `update(record, field, value)` is itself a tree of smaller sub-operations that may do permission checking, locking, network calls, even checking for the presence of the record if it has upsert semantics, all of which could fail. A dry run whose plan is too coarse can succeed while the actual operation fails on things left unchecked.


Yes, but it depends on the context.

For little scripts, I'm not writing unit tests: running it is the test. But I want to be able to iterate without side effects, so it's important that the dry mode be as representative as possible of what'll happen when something is run for real.


You understand how subjective that is, right? Someone might expect that the database doesn't do the last commit step, while other people are perfectly happy that the database engine checks that it has enough write permissions and is running as a user that can start the process without problems.


Sure, where you draw the line will vary between projects. As long as its exact placement doesn't matter too much.

For me personally, I tend to draw the line at write operations. So in your example, I'd want a dry run to verify the permissions that it can (if I expect those to be a problem). But if that can't easily be done without a write, then maybe it's not worth it. There are also situations where you want a dry run to be really fast, so you forego some checks (allowing for more surprises later). Really just depends.


I'd argue the dry run is a form of integration testing: Essentially the writes are mocked, but the reads are still functional.
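That framing can be sketched concretely (names are made up for illustration): wrap the real store so that reads pass straight through while writes are captured into a plan instead of being applied.

```python
class RealStore:
    def __init__(self):
        self.data = {"a": 1}
    def read(self, key):
        return self.data[key]
    def write(self, key, value):
        self.data[key] = value

class DryRunStore:
    """Reads pass through to the real store; writes are only recorded."""
    def __init__(self, inner):
        self.inner = inner
        self.pending = []          # the "plan" of writes that would happen
    def read(self, key):
        return self.inner.read(key)
    def write(self, key, value):
        self.pending.append((key, value))

def increment(store, key):
    # Business logic is oblivious to which store it's given.
    store.write(key, store.read(key) + 1)

real = RealStore()
dry = DryRunStore(real)
increment(dry, "a")
assert real.data["a"] == 1          # nothing actually changed
assert dry.pending == [("a", 2)]    # but we know exactly what would have
```

This is exactly the shape of an integration test with a write mock, except the "mock" also doubles as the user-facing dry-run report.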


> Could anyone summarize why a desktop Windows/macOS now needs so much more RAM than in the past

Just a single retina screen buffer, assuming something like 2500 by 2500 pixels at 4 bytes per pixel, is already 25MB. Then you want double buffering, but also a per-window buffer, since you don't want to force redraws 60x per second and we want to drag windows around while showing their contents, not a wireframe. As you can see, just that adds up quickly. And that's just the draw buffers, not to mention all the different fonts that are simultaneously used, images that are shown, etc.

(Of course, screen buffers are typically stored in VRAM once drawn. But you need to draw first, which happens at least in part on the CPU.)
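The back-of-the-envelope arithmetic above, written out (the window count is an arbitrary assumption for illustration, and full-screen-sized per-window buffers are a pessimistic upper bound):

```python
width, height, bytes_per_pixel = 2500, 2500, 4

one_buffer = width * height * bytes_per_pixel
print(one_buffer / 1e6)   # 25.0 MB for a single screen-sized buffer

# Double-buffered compositor plus, say, 10 windows each with its own buffer:
total = one_buffer * 2 + one_buffer * 10
print(total / 1e6)        # 300.0 MB, before fonts, images, etc.
```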


Per-window double buffering is actively harmful: it means you're effectively triple buffering, since a frame goes window buffer -> composite buffer -> screen. And that's with perfect timing; even this kind of latency is actively unpleasant when typing or moving the mouse.

If you get the timing right, there should be no need for double-buffering individual windows.


You don't need to do all of this, though. You could just do arbitrary rendering using GPU compute, and only store a highly-compressed representation on the CPU.


Yes, but then the GPU needs that amount of RAM, so it's fairer to look at the sum of the RAM + VRAM requirements. With compressed representations you trade CPU cycles for RAM. To save laptop battery, it's better to require copious amounts of RAM (since it's cheap).


I will definitely read these books when they come out.

For a historic overview of mathematics with (accessible) formulas I highly recommend “Journey through genius: The great theorems of mathematics”.


Concurrent programming is hard and has many pitfalls; people are warned about this from the very, very start. If you then go about it without studying proper usage and common pitfalls, and do not use (very) defensive coding practices (violated by all the examples), then the main issue is just naivety. No programming language can really defend against that.


You are completely dismissing language design.

Also, these are minimal reproducers, the exact same mistakes can trivially happen in larger codebases across multiple files, where you wouldn't notice them immediately.


The whole point of not using C is that such pitfalls shouldn't compile in other languages.


Maybe the real reason is more related to Price’s law/Pareto’s principle, loosely meaning that 90% of the work is done by 10% of the people. In other words, in large companies most people do not contribute much, at least not at the same time.


Maybe, yeah.

And it's also quite possible that my view (which was across a slice of new-technology stuff hosted by the "innovation" arm) was skewed, and things aren't the same elsewhere in the company.

I just remember being shocked by the negativity.


Does anyone know what "native" means here, precisely? The Steam Deck has an x86-64 instruction set AFAIK, so is it just the same as the Windows version? Or does it have to do with the GPU/OS? Or does it just mean "properly configured"?


It means compiled for Linux/SteamOS instead of being compiled for Windows and using a compatibility layer to play.


Native as in it's a Linux binary, no Wine/Proton involved.


If this thinnest iPhone Air has 27 hours of video playback, why does the regular iPhone 17, which looks twice as thick, only have 30 hours? At this point, I just want long battery life. An "all-week" battery would be a nice start.


They are most likely using a higher-density battery in the Air, at least that's what the rumors suggest.


If you enjoy this art style, definitely check out the game Return of the Obra Dinn.


There’s a ditherpunk artist in Moscow named Uno Morales that I’m quite fond of: https://unomoralez.com/


I was just about to post the same link! Found his site today by pure happenstance.

Don't know this guy's technique, but the idea that people were drawing such elaborate pictures on tiny screens - with mice! not even tablets - boggles me. Every pixel a deliberate act.


Well, I use it before Google, since it in general summarizes webpages and removes the ads. Quite handy. It’s also very useful to check whether you understand something correctly. And for programming specifically, I found it really useful for help with naming things (which tends to be hard, not least because it’s subjective).


It’s a clever trick. But can it render textured text? Transparent text, gradient fills? Maybe it can, I don’t know. But why not just triangulate the glyph shapes and represent each glyph as a set of triangles? This triangulation can be done offline, making rendering very lightweight.


The linked post was about Evan's side project, but within Figma, all of that is indeed possible. The glyphs are transformed into vector networks[0], which has a fill pipeline that supports transparency, gradients, images, blending, masking, etc.

[0]: https://www.figma.com/blog/introducing-vector-networks/

