Hacker News | ben_w's comments

Indeed. One of the train stations here in Berlin is currently about 20 years behind schedule and costs were €154 million 5 years ago: https://newsroom.strabag.com/en/press-releases/group/2022-10...

Actually, the worst-run European nation would be, what, probably Hungary if you're only counting the EU, and Russia if you're not limited to the EU?


So, over the next few years, it is planned to more than double the capacity of Dublin's commuter rail network. Pretty much all of the commuter rail network meets at Connolly station (closest thing Dublin has to a central station; for historical reasons it kind of has two barely-connected ones), which has only three through platforms and is already overloaded at peak times. Worryingly, no-one has even started talking about upgrading Connolly yet. The commuter rail network upgrade is split into four largely concurrent stages, and none of them _on their own_ are likely to break anything, but when they're all done it seems completely impossible that Connolly could cope.

There are worse things than expensively and slowly upgrading a station. Such as not doing that even though it will obviously be needed by, at latest, 2028 or so.

(I _think_ maybe Irish Rail's rationale is that people on the western and south-western commuter lines will transfer to Metrolink a few stations before Connolly, but see above; Metrolink stubbornly persists in not existing.)


So all these things he's saying are going to leave people scared and afraid, on that we agree. What's the disingenuous part here?

Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.

But what, specifically, do you see? What am I blind to?

* given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or isn't like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams


He's pretending to care about the negative effects AI will have on society at large, but goes on to say it's necessary and "must" happen. If he actually cared, he wouldn't continue down that path. He also wouldn't be lobbying the DoD for contracts to use his AI to help kill people.

We're definitely not mining the moon for helium, but might well end up "mining" the gas giants.

Nah; at my last-but-one job I had an Iranian coworker, and I asked whether the way the regime calls the US and Israel the "Great Satan" and "Little Satan" was serious or a quirk of translation.

Apparently the regime is quite serious about the US being the actual devil.


Specifically, the US federal government. Just like most Americans don’t hate the people of Russia or Iran any more than the folks the next town over, I’ve never met someone from Iran, Afghanistan, Syria, Pakistan, or pretty much anywhere else who hates all Americans. I’m sure they exist, but probably as a small minority. There’s plenty of reason to hate our government though, especially if it has threatened to destroy your entire civilization.

I don't know about the percentage of the population, but anyone who leaves Iran and learns English (or German) is much less likely to be a fan of the Iranian regime than someone who never left Iran in the first place, so you'll definitely have a sampling bias.

That doesn’t mean they will become a fan of either the U.S. or Israeli regimes.

Growing up in the Southern US, I met plenty of "Let's bomb all the savages in the Middle East and take their oil" types. Some of them grew up to be self-proclaimed Nazis.

That's ignorance on top of brainwashing. If they met people from those countries they would drop that mindset in 30 seconds.

I'm not sure I agree. Given that the area in question here is the southern United States, and considering that racism is alive and well there, indeed with people groups they have met (and who speak English), I'm not convinced that exposure to non-whites speaking Farsi will somehow fix their attitude.

These people are racist against non-whites living in their own communities, whom they have spent their entire lives with. Meeting a dark-skinned stranger in a turban is a chance for them to confirm and bolster their biases, not to reduce them.

And even if they go through some kind of traumatic experience with a stranger from the Middle East and call them friend, it wouldn't stop the racism. I know plenty of racists with "black friends" who will tell you all about how "there are black people, and then there are n**rs". Some of their black friends will even parrot this kind of propaganda.


Yeah, but Americans are not the target of Russian aggression and violence. Russia is kind of an abstract enemy far away. Feelings get stronger when your country is the actual target of bombing.

What about the Iranians being targeted by drone with Russian help?

The same Russia that Trump can’t get enough of.


The US government is invoking religion in its justification, US military command has prayer meetings, and they call the attack on Iran “part of God's plan”

God's angels typically don't bomb your little girl's school.

All I'm saying is, I could see how someone who believes Satan influences the world would come to that idea.


God is documented as being rather keen on genocidal smiting. That is part of the exact problem. I googled two relevant examples:

  1: God commands King Saul: attack the Amalekites and totally destroy all that belongs to them. Do not spare them; put to death men and women, children and infants

  2: When the Israelites entered the Promised Land, they were often commanded to carry out total destruction against the Canaanite nations. "they utterly destroyed all that was in the city, both man and woman, young and old, and ox, and sheep, and ass"
I'm not into religion, but it has had a massive influence on my culture (NZ) so I pay some attention to it.

Holy books seem to be buffets that people just pick their favorite dishes from, for the most part. At least, in the western world. I can't speak to elsewhere.

The historical and religious context:

1. While approaching the land, the Amalekites had attacked them, preying on the weak. God had said that they would be destroyed. Now, probably partly as a test for their first king (he failed, didn’t eradicate them), God said, get on and do it.

2. God had promised the land to Abraham and his descendants, but said they’d only get it in four hundred years’ time, because “the iniquity of the Amorites is not yet complete”—they still had time to choose God’s ways. Only once they were irredeemable were they to be destroyed.


That's from the old covenant. If you believe in Christianity, the new covenant changes everything.

Lots of people who claim to be Christian still quote Leviticus as justification.

Not all of it: banning mixed fabrics (19:19), having land ownership revert every 50 years including houses outside walled cities (25:31), and animal sacrifice (all of chapters 1 and 3) would reveal how disconnected such people are, possibly even to the speakers themselves, so it has to be selective.


Are you aware of what the US regime has done to Iran? There's a reason they say that.

Literally the devil. Not metaphorically a bunch of bastards, the actual devil. And not as performed by Tom Ellis.

There's a reason why I asked the guy.

And I asked him a few years ago now, so "what the US did" that the regime found objectionable has more to do with the US support for Israel and all the consequences of that than it has to do with any direct attacks by the USA against Iran; for direct action I think you might need to look at the 1979 revolution to undo the 1953 CIA- and MI6-backed coup?


Just because someone hates you and calls you the devil (or loves you and calls you an angel) doesn't mean they think you're literally the physical embodiment. Especially when you're not even a living being but a country or a government. I'm pretty darn sure you can assume it's a metaphor and that your coworker doesn't have evidence to the contrary.

What matters more than proclaimed words is evaluating the actions of said government.

The Iranians have been pragmatic and relatively restrained, while USA and Israel have repeatedly escalated.

Iran has helped the USA deal with Al Qaida in Afghanistan and ISIS in Iraq. As payback, they have been included in the 'Axis of Evil' and subjected to heavy sanctions. Just one of the cases...


Trust me, we Ukrainians do mean that in relation to _anything_ that is to the north-east of our country.

A good rule of thumb is to always speak for yourself.


> Trust me, we Ukrainians do mean that in relation to _anything_ that is to the north-east of our country. A good rule of thumb is to always speak for yourself.

Leaving aside that I am skeptical millions of Ukrainians sincerely believe the devil has been launching missiles at them from the northeast (regardless of what you write here)... it's rather hypocritical to speak for millions of Ukrainians and then tell me to only speak for myself, don't you think?


I think the issue is our not believing what religious people themselves tell us about their reasoning.

Hegseth reasons? I don’t see it.

That's a great reason for people like me to not use it, and I assume for people like you also, but it's not the question that organisations like the EFF need to ask.

No, the relevant questions for the EFF are the ones that the EFF put into their blog post to explain why they're not on X despite remaining on e.g. Facebook, which may or may not be the same as this tweet (I don't read tweets but did read the blog post): https://www.eff.org/deeplinks/2026/04/eff-leaving-x


> I am not a chemist so I can't back it up, but if an AI can solve mathematics it's not unreasonable to say that they can also solve creating new neurotoxins en masse.

Right now it kinda is.

LLMs can do interesting things in mathematics while also making weird and unnecessary mistakes. With tool use that can improve. Other AI besides LLMs can do better, and have been for a while now, but think about how available LLMs in software development (so, not Claude Mythos) are still at best junior developers, and apply that to non-software roles.

This past February I tried to use Codex to make a physics simulation. Even though it identified open source libraries to use, instead of using them it wrote its own "as a fallback in case you can't install the FOSS libraries"; the simulation software it wrote itself was showing non-physical behaviour, but would I have known that if I hadn't already been interested in the thing I was trying to get it to build me a simulation of? I doubt it.


Well, the worst outcome is that you make something deadly, which is what you were trying to create anyway; do that for a year and you could possibly produce a very deadly substance that doesn't have a known treatment.

"Worst" outcome assumes it's easy to give an ordering.

Which is worse, (1) accidentally blowing yourself up with home-made nitroglycerin/poisoning yourself because your home-made fume hood was grossly insufficient, or (2) accidentally making a novel long-lived compound which will give 20 people slow-growing cancers that will on average lower their life expectancy by 2 years each?

What if it's a small dose of a mercury compound (or methyl alcohol) at a dose which causes a small degree of mental impairment in a large number of people?

If you're actually trying to cause harm, then your "worst" case scenario is diametrically opposed to everyone else's worst case scenario, because for you the "worst" case is that it does nothing at great expense.

Right now, I expect LLM failures to be more of the "does nothing or kills user" kind; given what I see from NileRed, even if you know what you're doing, chemistry can be hard to get right.


As someone who also watches NileRed: of course it is hard, but AI can give you solutions that you normally wouldn't be able to come up with due to lack of knowledge and/or education.

And to clarify, by 'worst case' I meant that you're already trying to create a deadly compound; the worst that can happen is that you kill yourself, which was already an accepted risk for the user.


> The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.

The LLMs aren't being punished for wanting* to know things.

The problem for LLMs is that they're incredibly gullible and eager to please, and it's been really difficult to stop them helping any human who asks, even when a normal human looking at the same transcript would say "this smells like the user wants to do a crime".

One use-case people reach for here is authors writing a novel about a crime. Do they need to know all the details? Mythbusters, on (one of?) their Breaking Bad episode(s?), investigated hydrofluoric acid plus a mystery extra ingredient they didn't broadcast, because (a) it made the stuff much more effective and (b) the name of the ingredient wasn't important, only the difference it made.

* Don't anthropomorphise yourself


Ironically, it reads to me like they're talking about the users wanting to know things, not the LLM.

The USA isn't the only country with anti-terrorism units, so there's plenty of room for systematic-US-incompetence at the same time as everyone else being diligent and working hard on… well, everything.

Information and competency are not the same thing: I know how to build a nuke, I can't actually build one.

AI is, and always has been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.

It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.

On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.

On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing, and even a few years back some researchers found 40,000 toxic molecules by flipping a min(harm) to a max(harm), so for people who know what they're doing and have a little experience, the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/
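
(To make that objective flip concrete, here's a minimal hypothetical sketch of my own, not the method from the linked paper; all the function names are made up. The point is that a search which normally penalises predicted toxicity only needs one sign changed to reward it instead:)

  # Purely illustrative sketch (not the actual code from the linked paper):
  # a generative search that normally *penalises* predicted toxicity can be
  # repurposed by flipping the sign of that one term in the objective.
  def score(candidate, predict_activity, predict_toxicity, flip_harm=False):
      """Score a candidate molecule; every name here is hypothetical."""
      activity = predict_activity(candidate)   # desired potency / drug-likeness
      toxicity = predict_toxicity(candidate)   # predicted harm
      sign = 1.0 if flip_harm else -1.0        # min(harm) becomes max(harm)
      return activity + sign * toxicity
  # The generator itself is unchanged; only the objective it optimises differs.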


Do you know how to build a nuke? You might know the technical details of how a nuke is made, but do you know everything that's required, all the parameters and pressure values? I find that unlikely, but AI seems to be increasingly capable of providing such instructions from cross-referenced data.

That's based on a silly belief (one that's becoming more obvious with AI, but is silly in general): that just because you can read about something, you've learned it.

Even if I gave you exact instructions on how to use even basic stuff like power tools - if you had no experience using stuff like grinders/saws/routers and I gave you full detailed instructions on how to do something non-trivial - you're more likely to cut off body parts than achieve what you intended. There's so much fundamental stuff that you must internalize subconsciously/through trial and error - before you can have enough mental capacity to think about the higher level objectives.

Actually, AI demonstrates this perfectly - once they get an RL harness for programming they start to get better at it. Without experimentation they can ingest all the source code/tutorials/books in the world and still produce shit.


Even if sources have been lying to me, which is certainly possible, I believe I understand enough to determine cross sections by experiment and from that to determine critical masses; for isotopic enrichment I know about the calutron, which is meh but works and can be designed from scratch with things I know (caveat: not memorised, just that I know the keywords "proton mass" and "Lorentz force" and what to use them for); for the trigger, I would pick a gun-type design rather than implosion; again, this is meh but works and is easy.

A few tens of millions of USD mostly spent on electricity, a surprisingly large quantity of natural uranium (because the interesting isotope is a very small percentage), and a few years, and I expect most people on this forum could make a Little Boy type bomb.


I Gave My OpenClaw a Robot Body and It Vibe Coded a Nuke

"Short stories from before the fall"

It's almost certainly possible, but when I've had cans of paint prepared (which, admittedly, was twice ever), they've always told me to get slightly more than I think I need because mixing a second batch that attempts to be the same isn't guaranteed to look the same when it dries. Still, if there's a continuous feedback mechanism like in this article, perhaps that's no longer true.

(Or perhaps it wasn't true even when the shop told me, and they were just repeating some second-hand knowledge that was last true a generation earlier; I wouldn't know).
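
(To sketch what I mean by continuous feedback, purely hypothetically and not how the machine in the article actually works: measure the colour, compare it to the target, add pigment in proportion to the error, repeat. The sensor and dispenser functions below are stand-ins I made up:)

  # Hypothetical proportional-correction loop; measure_colour() and add_pigment()
  # stand in for whatever sensor and dispenser a real mixing machine would have.
  def match_colour(target, measure_colour, add_pigment, gain=0.5, tolerance=1.0):
      current = measure_colour()                    # e.g. (L*, a*, b*) readings
      for _ in range(50):                           # cap the number of corrections
          error = [t - c for t, c in zip(target, current)]
          if max(abs(e) for e in error) < tolerance:
              break                                 # close enough to the target
          add_pigment([gain * e for e in error])    # correct a fraction of the error
          current = measure_colour()
      return current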

