
I do this on my iPad with the Magic Keyboard, and I'm a die-hard command-line user otherwise.

I think the reason I started doing it on the iPad is that the keyboard focus is sometimes inconsistent, so clicking or tab-tab-tab-enter is slower and less reliable than just touching the screen. I definitely feel the gorilla arm, though.


Exporting originals just hangs for me. Opening or switching a Photos library is basically hoping the Mac doesn't crash. Edits are locked inside the database, with no hope of ever getting them out. And God forbid you put the library on an external drive - never unplug it! It's a horrible piece of software.

I regularly back up my Photos library using rsync to prepare for the worst. From the files I see, it looks like all the originals are there under /originals, albeit renamed to UUID-style names. The EXIF data and contents seem to be intact, though, and the number of files and their names are stable. The database seems to be a plain SQLite DB.

I think it might make sense to extract the files directly that way and try to see how the DB stores the original filenames. It might not be too hard. The edits, though, I think are applied "live" (at least for video), so it's probably impossible to get them out this way.
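
As a rough sketch of that idea: the library's database can be queried read-only with sqlite3 (on a copy, never the live file). The table and column names below are assumptions from poking at recent Photos.sqlite files; the schema is undocumented and changes between macOS versions.

  -- Copy of <Library>.photoslibrary/database/Photos.sqlite (path may vary by version).
  -- ZASSET appears to hold the on-disk UUID-style name; ZADDITIONALASSETATTRIBUTES
  -- appears to keep the filename the asset had when it was imported.
  SELECT a.ZDIRECTORY            AS subdir_under_originals,
         a.ZFILENAME             AS uuid_name_on_disk,
         attr.ZORIGINALFILENAME  AS original_filename
  FROM   ZASSET a
  JOIN   ZADDITIONALASSETATTRIBUTES attr ON attr.ZASSET = a.Z_PK;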


Not again! I had these issues with the 2016 MacBook Pro (the Touch Bar one).

That one wasn't a hardware limitation either, as it ran my displays just fine in Boot Camp, but macOS would just produce fuzzy output all the way.

It's infuriating.


I agree. I've known how it works for years, and I think the current setting is a cop-out.

In TFA it's set to a measly 2MiB, yet Postgres tried to allocate 2TiB. Note that the PG default is double that, at 4MiB.

What the setting does is offload the responsibility of a "working" implementation onto you (or the DBA). If it just used the 4MiB default as a hardcoded value, one could argue it's a bug and bikeshed forever about what a "good" value is; and since there is no safe or good value, the approach itself would need to be reevaluated.
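
To sketch why there is no safe value: work_mem caps each individual sort or hash operation, not the query or the connection, and every parallel worker gets its own allowance. The query and table names below are made up, but a plan like it can legitimately use many multiples of the setting.

  SET work_mem = '4MB';  -- the default; the server in TFA was at 2MB

  -- Each Sort node and each Hash node in this plan may use up to work_mem on
  -- its own, multiplied again by any parallel workers, so the query's total
  -- memory use is not bounded by any single value you could pick here.
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT o.customer_id, sum(oi.amount) AS total
  FROM   orders o
  JOIN   order_items oi ON oi.order_id = o.id
  GROUP  BY o.customer_id
  ORDER  BY total DESC;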

The core issue is that there is no overall memory management strategy in Postgres, just the implementation.

Which is fine for an initial version: just add a few settings for all the constants in the code and boom, you have some knobs to turn. Unfortunately, you can't set them correctly; whatever you pick, it might still try to use an unbounded amount of memory.

The documentation is very transparent about this, and just from reading it you can tell they know it's a bad design, or at least an unsolved design issue. It describes the implementation accurately, yet offers nothing further in terms of actual useful guidance on what the value should be.

This is not a criticism of the docs, btw; I love the technically accurate docs in Postgres. But it's not the only setting in Postgres that is basically just an exposed internal knob, which I totally get as a software engineer.

However, from a product point of view, internal knobs are rarely all that useful. At this level of maturity, Postgres should probably aim to do a bit better on this front.


Yes, exactly. Although I didn't buy a new guitar, I did buy a dozen tuners. It finally clicked when I got one that was "real-time" enough to see how the tuning shifts from high to low. This was before smartphones could do it.

It doesn't help that most tuners are still dog slow, and none of the beginner courses properly tell you how the guitar actually works or what a "chord" really is. They're all just "play this and don't worry about it". To be fair, that does get you going.


How can you turn it off without turning off history ("My Activity") altogether?

I noticed the "memory" too and it's turned Gemini into a useless syncophant for me, but so subtle that I almost didn't spot it.


https://gemini.google.com/saved-info

The toggle next to "Your past chats with Gemini".


Using an M2 8GB Mac Mini, I only ever ran into problems when trying generative fill in Photoshop. There I get insufficient memory errors if the selection is too large.


Old too, and in my experience that was often slightly more work than fixing the bugs in my own implementation. I did swap out a borked module in the build-an-OS class once, but otherwise used my own.

I loved those courses, great memories.


Sounds very grim. I live in a snowy part of Europe and very little of this applies, except the stay-dry-and-warm part. Here are two things I learned:

1. Do what everyone else does, when they do it. And don't, when they don't. You could die.

There is usually a reason even if you don't understand it right now. You don't want to find out why when you're out in the cold and freezing.

2. Buy gear locally.

There's sometimes a reason a certain item is on the shelf and not the stylish one from California or the super heavy-duty one from Norway. Unfortunately, this is often only obvious in hindsight. It does not depend on price, but it does apply across the board, from clothing to cars.


I'm in California. We have good cold-weather gear; you just have to get it from the right kind of store, specifically one that supplies outdoor workers.


Maybe California is a bad example. What I'm getting at is that the local selection of what you need is usually larger and more applicable to the local conditions.

I see plenty of tourists with winter gear that is either insufficient or completely over the top, whereas if you buy locally you'd generally find the right stuff.


I think the bigger problem is that many of the tourists don't normally spend time outside, so they are used to only having enough gear for a heated building or car, even at home.


I've actually come around to the Postgres way of thinking. We shouldn't want or need plan hints usually.

Literally every slow Postgres statement I worked on in the last few years was due to a lack of accurate statistics, missing indexes, or just badly designed queries. Every one was fixable at the source, by actually fixing the core issue.
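
As a sketch of what "fixing it at the source" usually meant in those cases (the table, column, and index names here are made up):

  -- Stale statistics: refresh them (or tune autovacuum/autoanalyze so it keeps up).
  ANALYZE orders;

  -- Correlated columns the planner misestimates: add extended statistics.
  CREATE STATISTICS orders_customer_status_stats (dependencies)
      ON customer_id, status FROM orders;
  ANALYZE orders;

  -- Missing index: create the one the slow query actually needed.
  CREATE INDEX CONCURRENTLY orders_status_created_at_idx
      ON orders (status, created_at);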

This was in stark contrast to the myriad Oracle queries I also debugged. The larger, older ones had accumulated a "crust" of plan hints over the years, most not so well thought out and no longer valid. In fact, on newer Oracle versions, often just removing all the hints made a query faster rather than slower.

It's so tempting to just add a plan hint to "fix" the suboptimal query plan. However, the Postgres query planner often has an actual reason for why it does what it does, and overall I've found its decisions to be very consistent.


>I've actually come around to the Postgres way of thinking. We shouldn't want or need plan hints usually.

They only come out at night, mostly.

PG is 40 years old and still has planner bugs being fixed regularly. Having no control, and waiting for a new version when a hint could fix the issue at runtime, is an obvious problem that should have been addressed long ago.

It's great that the devs want to make the planner perfect and strive for that; it's an unattainable goal worth pursuing, IMO. But escape hatches are required, hence the very popular pg_hint_plan extension.
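
For anyone who hasn't used it: pg_hint_plan reads hints from a special comment placed in front of the statement. A minimal sketch with made-up table and index names:

  /*+ IndexScan(o orders_status_idx) NestLoop(o oi) */
  SELECT o.id, oi.amount
  FROM   orders o
  JOIN   order_items oi ON oi.order_id = o.id
  WHERE  o.status = 'open';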

But in the end, after many years of dealing with these things, I have come to the opposite conclusion: let the query language drive the plan more directly. Senior devs can fix junior devs' mistakes in the app's source code, and the plans will be committed to source control for all to see and reference going forward.

SQL comes from the idea of non-technical people querying a system in ad-hoc ways. That's still useful, but if you are technically competent in data structures and programming and are building an application that uses the DB, the planner just gets in your way, at least in my experience.

