It's honestly sad that it took like 20 years of bullshit to get to the point where we can freely discuss this stuff without being shouted out of the room for being sexist or some other -ist/-ism. The pursuit of equity is one of the most toxic things to happen to modern societies.
1. I don’t see how that’s better in any real way. You can infer the exact same information as querying the range, and it makes dynamic behavior based on age range (e.g. access to age-restricted chat rooms) completely impossible.
2. Is it meaningfully more identifying than User-Agent? There are dozens of other data points for uniquely identifying a user. If we get a few high-profile lawsuits because advertising companies knowingly showed harmful ads to children, I’d consider it a win. Age is not that interesting of a data point.
I wouldn’t focus on whether it’s “identifying” but on whether it’s revealing. Young teenagers are a very high-value target for advertisers: they are very impressionable, and they give advertisers a proxy to their parents’ money. So this law essentially makes it mandatory to share that information with advertisers. And also, by proxy, predators.
It also makes it explicitly illegal to use it for such purposes. While I agree on the point, I think in practice it changes little. I also think it could be a net positive, because now there’s no plausible deniability about the target’s age, opening up a decent amount of liability for exploitative practices targeting children specifically.
It's so much better. In the one case, the OS is leaking age information (even if just an age range) to every service it talks to. In the other case, the OS isn't telling anyone anything, and is just responding to the age rating that the app/service advertises.
How would you implement a feed of mixed content? Say you're YouTube and some videos are about puppies and some videos are about guns? How would you hide only the gun videos from the homepage when the user is under 16?
I'm not even talking about entire sections that feature blatantly pornographic or perverted content, some of which are clearly aimed at a younger audience who might accidentally stumble upon it through keywords you wouldn't expect.
1. Depends on how it's implemented. It won't identify you to individual platforms if the OS filters on a per-app or per-website basis. And yeah, there would be no dynamic behavior based on age, as that would enable tracking based on age. I don't think any kind of API is the ideal solution though; it's just better than the malicious one being mandated in the Cali bill. Instead of an API, it's simpler and more effective to just have an app installation lock (like sudo on Linux) and a firewall for website blocking, with a nice UI in the phone's settings locked behind a password/PIN.
2. Other data points like User-Agent are not required by law, and browsers already reduce or spoof the user agent by default. I agree that there are other data points we need to address, but the problem in this specific case is the slippery slope of legally mandated data points. And I don't think winning high-profile lawsuits is a real "win"; it just exposes a problem we already know about in this case. Keep in mind these are the same people who can get away with the Epstein files.
Just because you have never hit the issue doesn't mean it doesn't exist. This particular issue will only really show up on notched devices with a small screen and a lot of status bar icons. It's highly dependent on what model of Mac you run.
That's not what they meant. Lisp/Clojure syntax (s-expressions, code-as-data, functions as first-class values) maps closely to the theoretical foundations of computation - lambda calculus, which is arguably the mathematical essence of what computation is.
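To make that correspondence concrete, here's a minimal sketch (in Python rather than Clojure, since lambda terms translate almost one-to-one from `(fn [x] ...)` forms; the names `zero`, `succ`, `add`, and `to_int` are just illustrative). Church numerals encode numbers purely as function application, which is the core move of the lambda calculus:

```python
# Church numerals: numbers as pure functions, straight out of the
# lambda calculus. A numeral n applies f to x exactly n times.
zero = lambda f: lambda x: x                      # λf.λx. x
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # λn.λf.λx. f (n f x)
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Convert a Church numeral back to a Python int
    # by counting how many times f gets applied.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```

The same terms written as s-expressions are literally lists you can inspect and transform, which is the code-as-data part of the argument.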
> The general public does not care about anything other than the capabilities and limitations of your product.
It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.
It's as it always has been: balancing quality and features is... well, a balance, and it matters.
The public doesn't care about the code itself, they absolutely care about the quality and experience of using the software.
But you can have an extremely well designed product that functions flawlessly from the perspective of the user, but under the hood it's all spaghetti code.
My point was that consuming software as a user of the product can be quite different from the experience of writing that software.
Facebook is a great example of this: there's some gnarly old spaghetti code under the hood just from years of legacy code, but it's largely invisible to users and their experience of the product.
I'd just be careful to separate code elegance from product experience, since they are different. Related? Yeah, sure. But they're not the same thing.
I have yet to meet anyone whose problem with AI is that the code is not aesthetically pleasing, but that would actually be an indicator to me that people are using these things responsibly.
My own two cents: there's an inherent tension with assistants and agents as productivity tools. The more you "let them rip", the higher the potential productivity benefits, and the less you will understand the outputs, or even whether they built the "correct thing", which in many cases is something you can only crystallize an understanding of by doing the thing.
So I'm happy for all the people who don't care about code quality in terms of its aesthetic properties who are really enjoying the AI era, that's great. But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue. More to the point, anyone like this should ask why anyone should feel the need to employ you for your services in the future, since your job amounts to "telling the LLM what to do and accepting its output uncritically".
>But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue.
I think that's actually a good way to look at it. I use AI to help produce code in my day to day, but I'm still taking quite a while to produce features and a lot of it is because of that. I'm spending most of my time reading code, adjusting specs, and general design work even if I'm not writing code myself.
There's no free lunch here, the workflow is just different.
That's true, but Excel '98 would still cover probably 80% of users' use cases.
A lot, and I mean a lot, of software work is trying to justify its existence by constantly playing and toying with a product that worked for everyone in version 1.0, whether to justify a job or to justify charging customers $$ per month to "keep current".
> Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'm sure that's the case in basically everything, it sorta doesn't matter (until it does) if it's cordoned off into a corner that doesn't change and nominally works from the outside perspective.
But those cases are usually isolated, if they aren't it usually quickly becomes noticeable to the user in one way or another, and I think that's where these new tools give the illusion of faster velocity.
If it's truly all spaghetti underneath, the ability to make changes nosedives.
Facebook.com is a monstrosity though, and their mobile apps as well are slow and often broken. And the younger generations are using other networks; Facebook is in trouble.
Once someone builds a reasonable Google+ clone on ATProto or ActivityPub I'd probably switch to that. I don't think we've solved reputation when it comes to decentralized identity providers yet either.
I have some serious designs for a federated reputation system that is, as far as I can tell, novel, but I haven't had time to really refine it and develop a proof of concept. Just a pile of notes for now.
Have a look at how labellers work in ATProto. They form a good foundation imo, perhaps sufficient as they currently stand. Good prior art if we abstract beyond just ATProto; I'm not sure what W3C might already have in the works that's similar enough.
https://roost.tools is another group you may look into. They are broader in scope for Trust & Safety across the internet at large. Their current focus is a couple of OSS tools for builders, but the ambition is big and something to appreciate.
I had a ReMarkable 2 for a while and don't really recommend it. It's not the same as writing on pen & paper, and I like the experience of trying different papers, pencil leads, pens, etc. anyway.
To be more specific, the ReMarkable 2 had a wildly inaccurate pen tip, but only on roughly the bottom third to half of the screen, which was enough to completely destroy my desire to use it at all. On top of that, the software is pretty meh. It wasn't bad so much as minimal to the point of being harder to work with than real paper. The UI was clunky and slow. Any real advantage of the digital format (built-in OCR, sort-of search) was so poorly implemented that it wasn't worth it.