> For better or worse, Reddit is really the only place to go find legitimate information anymore.
This is frightening and, I fear, true.
But I'd also add one odd little counterpoint: some of the most useful discussions and learning experiences I've had in the last four years have happened in private Facebook groups. As soon as the incentive to build a following using growth-hacking and AI -- which private groups mitigate to a greater extent -- is taken away, you get back to the helpful stuff.
The FreeCAD group on Facebook is great, for example. And there are private photography groups, 3D printing groups, music groups etc., where people have an incentive to be authentic.
Public Facebook feeds are drowning in AI slop. But people who manage their own groups are keeping the spirit alive. It's almost at the point where I think Facebook will ultimately morph into a paid groups platform.
I wonder: do all the HNers who are excited about their GenAI product or wrapper or startup understand, at a fundamental level, that they are an intrinsic part of this deterioration?
Or is this one of those fundamental attribution error things:
- MY product is a powerful tool for creators who wish to save time
- THEIR product is just a poorly-thought-out slop generator
Does it occur to people to instead be part of something real and visceral, rather than just blaming social media's ad-driven impression model, or pretending they are merely part of a trend for which they can't be totally blamed?
You have only had google image search for, what, 20 years? Why do you think it is a fundamental part of humanity's growth story?
You talk about being a part of something "real and visceral" but you're complaining about the demise of being able to sit at your desk and see pictures of wildlife. Maybe it's okay that google image search dies and makes people go out and find the wildlife they want to see.
The internet, even in its best format (e.g. ad-free, free access information for all; and communication with all of humanity) has a ton of real downsides. It's not clear to me that AI should be strangled in its infancy to save the internet (which does _not_ exist in that "best" format).
haha, no definitely not! The internet is mostly not "real and visceral" so losing parts of it to AI-generated nonsense IS a loss, just not a loss of the actual underlying thing (in this case: baby peacocks).
Unfortunately your comment is doing the same thing, just at a different level—something like this:
- I am a thoughtful technologist, building real things for real people, concerned about others and the social impact of my work;
- they are greedy and ignorant, destroying society for short-term personal gain, no matter what the consequences.
It's human nature to put badness on an abstract them, but we don't get anywhere that way. It's good for getting agreement (e.g. upvotes), because we all put ourselves in that sweet I bucket and participate in the down-with-them feeling. But it only leads to more of what everyone decries.
First off, no, it did absolutely not do the same thing. It was a polemic question, sure, but it was a specific criticism of a technology and its proponents.
I did not make any claims about myself at all, until I was separately accused of being something or other by someone projecting onto me whatever it was they needed to feel better about themselves.
Second, you have rate-limited me with the "posting too fast" thing so I couldn't reply to your comment or the other ad hominem, even though I was posting no faster than in the discussions about OpenSCAD and FreeCAD I had been involved with earlier (considerably less often, I would say).
It's IMO really classless to use your administrative privileges to silence people after you accuse them of something but before they can respond, but I am not surprised to see that.
I will repeat: I think it is really clear to me, and to everyone I have met outside this bubble, that there is no fine distinction to be drawn between content-generating AI projects that are "good" and those that are contributing to "slop". It's all slop generation; e.g. NotebookLM is no better or cleverer than Midjourney.
Every tool HNers are excited about is going to be used to make the world's culture, and the web, worse.
I'd encourage you and those reading to consider this.
Sure, you can't make much of a change by yourself. But you don't have to be part of what amounts to inflicting automated cultural vandalism on an unprecedented scale.
Sure but doesn't every technological development have these tradeoffs?
You could say what you're saying about anyone at any time. Where do you draw the line? I guarantee you'll be guilty of the exact same thing. I don't want to generalize, but IMO I hear this sentiment of yours most loudly from software engineers far removed from ordinary non-technical end users: is making beautiful new LISPs and CNIs and Python package auditing tools the only valid work, with seemingly no tradeoffs?
> I hear most loudly from software engineers far removed from ordinary non-technical end users
I am absolutely not far removed from non-technical end users. They are my client base, ultimately. As a freelancer I focus on building real things that make things better for people whose faces and voices I get to know. GenAI will be useless to them, because it is antithetical to what they do.
And that focus is only getting keener; I want nothing to do with the AI-generated web.
Every technical advancement has tradeoffs. Not every technical advancement has billions of dollars sloshing around doing absolutely nothing except making the web worse and further ruining the environment. What a shockingly bad-faith way to interpret GP's argument, wow.
The comment is an interesting but very cookie-cutter sort of vamp and drama. It trades in a bunch of generalization, much like yours, and, you know, generalization doesn't feel good when it directly attacks you.
I don't sincerely believe that people who are working on Kubernetes features or observability tools are bad people. Do high-drama personalities who engage in a mode of discourse of "wow" and "shockingly" say valid things too? Yeah. But it's as simple as: mind the log in your own eye before you worry about the speck in others'. Exceptionally ironic because the poster is vamping about "attribution errors." Another POV is: shysters project.
There's a sort of "technological fundamental attribution error" that comes into play a lot with new technologies. Every past technology has, whatever its benefits to humanity, become substantially tarnished by abuse and malicious use. But this one won't be! Promise!
That said, I don't really think this is a tide any individual market actor can reasonably stem. It's going to require some pretty fundamental changes in the way we use the internet.
I propose a new rule.
"Please respond to the actual actions and consequences of said actions, not what is said in a statement to generate positive PR. Assume that putting one's money where one's mouth is, is harder to do than simply blowing hot air about creating a private, ethical platform."
Sick and tired of giving parasites benefit of the doubt they've long sucked dry.
Are you saying AI isn’t useful? My product is painstakingly crafted and uses AI but in my opinion it uses it tastefully and with great utility. Also 95%+ of my development efforts are not on improving the AI even though I use a .ai TLD. I think it’s crazy for a modern company/product _not_ to use AI, and the grifters building clear wrappers for GPT and other insanely low-quality efforts are already pretty much dead.
> I'd hazard the actual problem in this picture is Ghana's GDP/capita being in 4 digit territory and not the badly disposed of waste dump.
But if Ghana became a wealthy country and chose not to accept this waste, it would end up in the next one.
The waste exists regardless. The economic incentive for the originating market to "export" it, that is, to hide the problem, and for the receiving country to reluctantly accept it for some other consideration, whether money or state aid or tariff-free export of something else, will always exist while the waste does.
Re: "badly disposed of waste dump", the difference between this and landfill anywhere in the west is largely just the soil on top. Staggering amounts of recyclable and dangerous stuff still gets thrown away in inappropriate ways right near where you live, I imagine. And if the global North exports waste to the global South, sooner or later the scale almost inevitably overwhelms the receiver.
There are a finite number of poor countries. At the rate wealth is being generated it is conceivable that they all get wealthy enough that the waste gets handled well.
And this stuff all started out in heavy metals deposits; it is already present underground somewhere. The only real question is how serious the effects on humans are with any method of disposal. It isn't at all clear there is a problem as long as it is buried fairly deep and not leaching into the water table.
>> There are a finite number of poor countries. At the rate wealth is being generated it is conceivable that they all get wealthy enough that the waste gets handled well.
This waste was dumped. The fact that poor people moved to the dump to make a living scavenging is a secondary phenomenon. Without them it still would have been dumped.
Yeah but it being dumped, in the abstract, doesn't matter. It is like complaining that there is a desert or an ocean - there are places on the earth that aren't good to live in. An electronics dump somewhere doesn't rate compared to something like the Pacific Ocean in terms of how much landmass gets sterilised.
This is a bit of an imaginary solution to the problem, is it not? And there will always be poor_er_ countries, which is the thrust of my point.
The economic incentive does not go away. Not least because it is clearly already cheaper to float it away on a huge boat than bury it where it is used.
One problem is land cost: it's extremely difficult to safely build new houses on top of landfill. But that doesn't explain everything, does it? After all the USA has plenty of room to bury all its consumer waste. Why is it exporting it?
> And this stuff all started out in heavy metals deposits, it is already present underground somewhere.
It does not start out all in one place, though. It starts out in small, dispersed concentrations of heavy metals, and ends up all in a few giant landfills in poorer countries. It's not clear what the risk is, but the lack of clarity doesn't mean there's no risk.
> This is a bit of an imaginary solution to the problem, is it not? And there will always be poor_er_ countries, which is the thrust of my point.
I don't mind if the waste comes to my country. Australia is big and we're wealthy enough that it'll be handled safely. If we were the poorest country on the globe then it'd be a non-issue.
There's quite a number of posts here that seem determined to attack the integrity of The Atlantic... hm. Everything from "their writers are scared that they'll lose their jobs" to complaints about twenty year old articles not panning out correctly.
Jobs was a sort of cracked genius and a very imperfect human who wanted to be a better human. Money didn't make him worse, or better. It didn't really change him at all on a personal level. It didn't even make him more confident, because he was always that. Look back through anecdotes about him in his life and he's just the same guy, all the time.
Even the stories I heard about him from one of his indirect reports back in the pre-iCEO "Apple is still fucked, NeXT is a distracted mess" era were just like stories told about him from the dawn of Apple and in the iPhone era.
Musk and Altman are opportunists. Musk appears to be a malignant narcissist. Neither seems in a rush to be a better human.
Really? Are you sure?
His article can basically be summed up as "don't believe AI hype from Sama." It's not particularly well written; he's no Nabokov. ChatGPT bangs out stuff like this effortlessly.
Here, I did it for you
https://chatgpt.com/share/67019a6c-453c-8006-88aa-6f32435492...
And it still won't produce the type of articles he produces. Because at the very least he is capable of writing new articles from something the LLM doesn't have: his brain.
Seriously. This is just the parrot thing again. The fact that AI proponents confuse the form of words with authorial intent is mindbending to me.
I’m not confused, I just disagree. I don’t think that authorial intent is something fundamentally different than text prediction.
When I’m writing out a comment, there’s no muse in my head singing the words to me. I have a model of who I am and what I believe - if I weren’t religious I might say I am that model - and I type things out by picking the words which that guy would say in response to the input I read.
(The model isn’t a transformer-based LLM, of course.)
You clearly, clearly do not understand what I am saying. But sure, waste your time and money making a parrot that, unlike the author it mimics, is incapable of introspection, reflection, intellectual evolution or simply changing its mind.
Words are words. Writers are writers. Writers are not words.
ETA: consider what would actually be necessary to prove me wrong. And when you hear back from David Karpf about his willingness to take part in that experiment, write a blog post about it and any results, post it to HN.
I am sure people here will happily suggest topics for the articles. I, for example, would love to hear what your hypothetical ChatKarpf has to say about influences from his childhood that David Karpf has never written about, or things he believed at age five that aren't true and how that affects his writing now.
Do you see what I mean? These aren't even particularly forced examples: writers draw on private stuff, internal thoughts, internal contradictions, all the time, consciously and unconsciously.
You articulate this position well. I've tried to convey something similar and it's tough to find the words to explain to people. I really like this phrase:
"Words are words. Writers are writers. Writers are not words."
I'm very bullish on AI/LLMs but I think we do need to have a better shared understanding of what they are and what they aren't. I think there's a lot of confusion around this.
Thank you. I don't think it really explains the distinction, of course. It just makes it clear there necessarily must be one, and it can't be wished away by discussions of larger training sets, more token context, or whatever. It never will be wished away.
I wonder if they will. I can't imagine anything trackpoint-like -- too expensive -- but it's not impossible they'll put a trackpad in.
The Pi 400 is small because it's meant to be aimed at the young. But young children are good with clicking buttons and poor with fine dexterity; they will likely do better with mice.
A clip-on trackpad to go alongside it would be nice, though.
I'm not really the target market but I am hoping a Pi 500 will tick all the right boxes to use as a cheap computer in a space-constrained workshop.
> since most people are likely not buying the keyboard form factor as a headless device
A big part of the aim was simply to provide a nearly-all-in-one device for educational computing.
Physical computing is part of the school curriculum in the UK, mainly starting in Key Stage 2 (7-11 year olds, though there's some in Key Stage 1, as well).
This device exists in a very particular market space where, for example, the CrowPi laptops sell quite well. But it has been made by a team of people who are influenced by the BBC Model B and the profound impact it had.
Sure, and I'm not advocating for removing the GPIO, just suggesting that "make the GPIO require an adapter, and allow full-sized video without one" might serve more use cases without an adapter, since I'd propose that more people want to use the Pi 400 with video than with GPIO.
I'm not saying the latter isn't a significant fraction, I'm just saying I think almost nobody buys the keyboard form factor device not to use the video with it.
The Pi 400 doesn't require an adapter, it requires a micro HDMI cable, but such a cable doesn't take up any more space than a regular HDMI cable. I don't think I've ever seen a monitor with a permanently attached HDMI cable, so what use cases does the micro HDMI preclude?
The presence of a GPIO connector in a keyboard format computer is basically the entire point of the Raspberry Pi 400.
You can consider the Pi 400 to be a very particular love letter to the BBC model B, with the GPIO port providing the kind of physical computing access provided by the "Tube" port.