
If you were driving over an unmarked, unbarricaded bridge that Google Maps directed you across on a dark and rainy night, are you 100% certain you'd be driving slowly, undistracted, and checking to make sure the bridge hadn't collapsed?

This analogy doesn't work because you can assume that if a bridge exists and doesn't have traffic cones or barriers, it was probably built by humans and is fit for use (i.e. isn't half built). The same assumption doesn't hold for LLM outputs, which are wholly generated by AI. If I were in some simulation where the environment was vibecoded by AI, I'd be very careful too.

That's kind of what I was trying to say, or at least it goes along with it. This meme of "somebody drove into a river just because Google Maps told them to" is a grossly distorted retelling of a fatal accident. One could twist any tragedy into a glib soundbite about how the dead stupidly trusted other people. The street could collapse under my feet as I'm crossing it and I could drown in the sewer, and people on the internet would laugh about how I dived into the sewer just because a traffic light told me to. There were some cracks in the asphalt, so obviously I should have known it wasn't safe to walk across, but I wasn't thinking for myself.

I suppose part of the reason so many people are so dangerously trustful of LLMs is that they assume that if the LLM was put out there by decently responsible humans, then the LLM itself should be decently responsible too (a doubtful assumption, but an understandable one)? The analogy does break down there.


Yeah... Non-sentient monkey "organ sacks" as a replacement for animal testing sounds great, but those organs aren't going to function or even develop the same without a brain. At best, I think this could only be another step to filter out unsafe compounds between testing on cells and testing on whole animals. Potentially with misleading results, I imagine.

Could you give a concrete example or two of what exactly this system does? Like, what's a scientific result or two it has formally mathematically proved?


It would seem their service identifies only phishing sites as legitimate ones: 100% of the sites they deem legitimate are phishing sites. Incredible.


I find it hard to imagine that the people in a position to kill those processes could ever be that zealously in love with AI, but recent events have given me a tiny bit of doubt.


I mean, in the cases where higher command has said "launch your nukes" and lower command has not done so, and everything turned out okay, higher command is of course glad it worked out this time, but it certainly also looks like a problem with the system, one that needs to be automated away. So a computer that will launch all the nukes when ordered must look very appealing, in contrast to humans who might save humanity.


I hope the security team talked to the legal team about that. There is potential for OpenClaw to commit crimes on behalf of the company.


The ones who give it free rein to run any code it finds on the internet, on their own personal computers, with no security precautions, are maybe getting a little too excited about it.
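
For contrast, here's a minimal sketch of one such precaution, assuming Docker is available (the helper name, image, and limits are purely illustrative, not anything OpenClaw actually does): run fetched code in a throwaway container with no network access instead of directly on your machine.

    import os
    import subprocess

    def run_untrusted(script_path: str) -> subprocess.CompletedProcess:
        """Run a downloaded script in an ephemeral, network-less container.

        Hypothetical helper: image, limits, and mount point are illustrative.
        """
        return subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",   # no network: nothing to exfiltrate to
                "--memory", "256m",    # cap memory use
                "--read-only",         # immutable container filesystem...
                "--tmpfs", "/tmp",     # ...except a scratch /tmp
                "-v", f"{os.path.abspath(script_path)}:/sandbox/script.py:ro",
                "python:3.12-slim",
                "python", "/sandbox/script.py",
            ],
            capture_output=True,
            text=True,
            timeout=30,  # kill runaway code
        )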


That's one of the main reasons there's a small run on buying Mac Minis.


"I gave my agent access to Telegram, Webchat, and email... No data leaves my network except via Anthropic API calls and email." I wouldn't hire a security professional who's this proudly sloppy and incoherent.


> Telegram became my main interface to AgentX (my OpenClaw agent). Why Telegram?

> Secure (end-to-end encryption available)

It seems Telegram bot conversations don't use the Secret Chat feature [1]; bot traffic goes through Telegram's server-side Bot API instead (see the sketch after the links below). Even if they did, I'd be cautious [2].

[1] https://community.latenode.com/t/are-telegram-bot-conversati...

[2] https://blog.cryptographyengineering.com/2024/08/25/telegram...
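
To make that concrete, here's a minimal sketch (illustrative, not from the post) of how a bot actually exchanges messages: ordinary HTTPS calls to Telegram's Bot API. TLS protects the transport, but Telegram's servers see every message in the clear; Secret Chat's end-to-end encryption never enters the picture. The token is a placeholder.

    import requests

    BOT_TOKEN = "123456:ABC-example"  # placeholder, not a real token
    API = f"https://api.telegram.org/bot{BOT_TOKEN}"

    def poll_and_echo() -> None:
        """Long-poll the Bot API for updates and echo text messages back."""
        offset = None
        while True:
            # getUpdates and sendMessage are plain HTTPS calls to Telegram's
            # servers, which see the message contents.
            resp = requests.get(
                f"{API}/getUpdates",
                params={"timeout": 30, "offset": offset},
                timeout=35,
            ).json()
            for update in resp.get("result", []):
                offset = update["update_id"] + 1
                msg = update.get("message")
                if msg and "text" in msg:
                    requests.post(
                        f"{API}/sendMessage",
                        json={"chat_id": msg["chat"]["id"], "text": msg["text"]},
                        timeout=10,
                    )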


It's strange to me how different people have different eyes for spotting AI. Sometimes I see somebody say, "That's obviously AI, look at how wrong it looks!" and I can barely see it even after they point it out. Sometimes I see somebody say, "Hm, it looks almost indistinguishable from a real photo, I might not have realized if I didn't already know," when I found it immediately jarring. This article shows three photos of ads and says the first two are clearly AI-generated and the third possibly so; to me, the third was the only one where I thought, "That thing looks really fucked up in one of those ways only AI can do."


If we already know of enough concerns to be certain mass deployment will be disastrous, is it worth it just to better understand the nature of a disaster that doesn't have to happen in the first place?


Not having perfect security does not mean it will be disastrous. My OpenClaw has been serving me just fine, and I've been getting value out of its integrations and its help with various tasks.


Most drunk drivers make it home fine too


[Insert survivorship bias aeroplane png here]

