It's worth noting that in China, where the whole country is on a single timezone (which is roughly solar time in the eastern part, but far from it in the western part), places in the west simply have a different notion of time.
That might be true for this particular thing, but there’ll still be some other perceived barrier of complexity, e.g. maybe it’s hardware, maybe it’s math, maybe it’s some higher level application like graphics. My point was that I was reminded to not just assume something would be hard without looking into it.
No it doesn't. If you have a file that's 2^36 bytes and your address space is only 2^32, it won't work.
On a related digression, I've seen so many cases of programs that could've handled infinitely long input in constant space instead implemented as some form of "read the whole input into memory", which unnecessarily puts a limit on the input length.
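To sketch the difference (names here are illustrative, not from any particular program): a streaming line counter whose memory use is bounded by its chunk size, next to the slurp-everything version that needs as much memory as the input is long.

```python
import io

def count_lines_streaming(stream, chunk_size=64 * 1024):
    """Count newlines one fixed-size chunk at a time: memory use is
    bounded by chunk_size no matter how long the input is."""
    count = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        count += chunk.count(b"\n")
    return count

def count_lines_slurp(stream):
    """The anti-pattern: reads the whole input into memory first, so a
    32 GB input needs 32 GB of memory for no good reason."""
    return stream.read().count(b"\n")

data = b"one\ntwo\nthree\n"
assert count_lines_streaming(io.BytesIO(data)) == 3
assert count_lines_slurp(io.BytesIO(data)) == 3
```

Both give the same answer; only the streaming one keeps working when the input is a pipe or a file that dwarfs available memory.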
The point the article makes is that a 32 GB file can be mmapped even if you only have 8 GB of memory available; it wasn't talking about the address space. So the response is irrelevant even if technically correct.
> What they said is correct regardless of that though?
I don't think so.
Their post is basically:
>> It still works if the file doesn't fit in RAM
> No it doesn't.
Which is incorrect: it actually does work for files that don't fit in RAM. It doesn't work only for files that don't fit in the address space, which is not what the author claimed.
All memory map APIs support moveable “windows” or views into files that are much larger than either physical memory or the virtual address space.
I’ve seen otherwise competent developers use compile-time flags to bypass mmap on 32-bit systems even though this always worked! I dealt with database engines in the 1990s that used mmap for files tens of gigabytes in size.
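To make the windowed-view point concrete, here is a minimal Python sketch. The file is kept small for the demo, but the same offset-based pattern is what lets you map a view into a file far larger than RAM or even the address space: you only ever map one window at a time.

```python
import mmap
import os
import tempfile

# A sparse 1 MiB file standing in for a huge one.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 1024 * 1024)

window = 64 * 1024                    # map only a 64 KiB view at a time
gran = mmap.ALLOCATIONGRANULARITY     # mapping offsets must be aligned to this
offset = (512 * 1024 // gran) * gran  # an aligned offset into the file

# Map just the window, not the whole file, then write through it.
with mmap.mmap(fd, window, offset=offset) as view:
    view[:5] = b"hello"

# Verify the bytes landed at the right place in the underlying file.
with open(path, "rb") as f:
    f.seek(offset)
    assert f.read(5) == b"hello"

os.close(fd)
os.unlink(path)
```

Moving the window is just unmapping and remapping with a different (aligned) `offset`, which is how 1990s-era 32-bit database engines worked through files tens of gigabytes in size.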
I've never seen the word "delve" show up with such frequency in the pre-AI era, but now it's an overwhelmingly large signal of LLM-generated text, so I'm not sure where that came from. Ditto for vomiting emojis everywhere.
I have heard that the human trainers for early LLM models were overwhelmingly from West Africa, so some of the word choices reflect that, including a preference for the word delve. As a result, humans from that part of the world are now frequently and unfairly suspected of being AI.
Some UI animations are slow and jittery - and this is on an M4 Pro.
It's clear that no one at Apple (or any other big tech company these days) has ever watched old demoscene productions, then contemplated their performance against the available computing power of their current products and the experience thereof, and thought "something is very wrong".
Not sure we need another term for this, as "utilities" has been the accepted term for various one-off programs that do miscellaneous things, and of which power users tend to have a rather large collection.
However, the term reminded me of a memorable interaction I had many decades ago with an old woman who wanted to write a program in x86 Asm to manage various aspects of the plants in her garden. (She did succeed at doing so.)
"Utilities" is a generic term suggesting it is small, potentially reusable, purpose-limited, and used to simplify a task.
"Utilities" doesn't indicate the audience or the intended longevity of use of the tool like "houseplant" and "bouquet" do.
Both indicate they are built for personal use cases, suggesting potentially low reusability. The longevity of "houseplant" suggests it's intended for ongoing use, while "bouquet" suggests a limited use tool.
With work, either could be made reusable for others, but I think it's implied that the scope is an edge case or uncommon case that likely only applies to its creator or a very limited audience.
I see value in the terms, but these terms may themselves be houseplant terms; I'm not sure general adoption is useful to someone not building houseplant software, since they are mostly hobbyist terms by definition.
I was surprised when I actually dabbled in x86 ASM (in the guise of MASM, which is arguably a higher-level language than raw ASM) with BIOS and DOS interrupts as functions - it's quite close to C and not at all difficult, just tedious.
A powerful editor/IDE makes it ... not the worst programming experience in the world.
And since it's "so detailed" it's pretty easy to understand and explain, unlike higher-level languages that "do everything for you".
Home-cooked doesn't sound right. The term Handmade has a fair bit of usage for this already and has a lot less semantic friction in my head. Homemade isn't bad either, if you really like the cooking analogy.
Where should a cord on a mouse be when it's charging? The same place as any cord on a mouse should be, i.e. the tail, would be the commonsense answer. Indeed, this is how all other dual-mode mice do it.