xtajv's comments

The phrase "don't give them ideas" comes to mind.

This is the part where you'd normally pull the junior engineer aside and politely give them a stern talking to until they understood what they did wrong.

If anybody has suggestions for how to do this with LLMs (short of maintaining CLAUDE_wall_of_shame.md), please share.

Edit: for the record, yes I do run a linter, and generally try not to impose bikeshedding or soapboxes on my peers. It's just that there are certain patterns that I personally am not going to commit under my own username as the engineer of record.

Edit 2: I saw another comment recommending "Always confirm with me before doing $x" (and then always denying). Seems like it might work.
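For what it's worth, a minimal sketch of what that rule might look like in a CLAUDE.md; the section name and the specific patterns here are made-up examples, not a tested recipe:

```markdown
## Review gates (always ask before doing these)

- Always confirm with me before adding a new dependency.
- Always confirm with me before disabling or suppressing a lint rule.
- Always confirm with me before catching-and-ignoring an exception.
```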


What I do to avoid this is to manually approve each change Claude is doing

I think the yolo mode of auto-approving changes is the root cause. It's probably a little embarrassing to be that engineer we're all collectively pulling aside to ask:

Is this the result of automatically letting the robot tune your machine?


It occurs to me that software engineering is just about the only engineering field which is neither licensed nor bonded nor insured.

I wonder if AI / shadow IT will change that.


> I wonder if AI / shadow IT will change that.

I doubt it.

Computing has traditionally been all about math and logic. This is really all that a binary logic computer is capable of. When applied to this purpose, it can offer highly accurate results at very low cost.

Current AI is an attempt to branch out from simply calculating into decision making. But it does so in the worst possible way --- using probability and statistics (aka guesswork) instead of logic and reasoning. In other words, AI offers questionable results at high cost.

As this article shows, relying on guesswork is a legal liability issue waiting to happen in many (if not most) operating environments.


Heh, I wasn't suggesting that AI would actually replace decision-making. Rather, I wonder whether attempts to use AI in this way would result in such publicly embarrassing and catastrophic outcomes that software engineers might decide to organize professional guardrails about it.

I fully agree, this seems like a legal liability issue waiting to happen.


> software engineers might decide to organize professional guardrails about it.

Engineering isn't the one creating the liability issue here --- marketing is.

And it starts with using the word "intelligence" to describe what an LLM does.


> These dialogs always prompt me to chime in with my solution: make the police be self-insured, backed by their pension fund.

I'm curious, what exactly do you mean by "self-insured"?

(Is the idea to combine literal insurance underwriting for retirement planning with a monetary incentive system for ongoing work performance?)


They mean that penalties and restitutions for wrongful prosecutions and wrongful convictions should not come from taxpayer money but private insurance. Right now, police departments feel zero pain from judgements against them so they have no reason to structurally correct their behaviour.

How are police going to pay for private insurance, though? From police officer salaries (which come from taxpayers)?

(Nonconsensual) genital mutilation is bad no matter who you are or what parts you have.

Also: If pain becomes a contest, we're all losers.

Also: Thank you for complaining. There is much to complain about. There's so much to complain about that we can sit in a circle and take turns complaining and everybody will probably learn something.


Spot to complain that I missed a spot:

(P.S. you can also add a new thread)


Spot to complain about intersex genital mutilation:

Spot to complain about female genital mutilation:

Spot to complain about male genital mutilation:

My understanding is that (nonconsensual) circumcision of infants is quite common in some regions of the planet, and that some impacted individuals wish that this decision had not been made for them without their consent.

That seems bad.


I remember taking a machine learning course in which the instructor explicitly warned us to make wise fiscal decisions, based on the assumption that ML funding follows a hype-driven boom/bust cycle.

"Save during the summers and you'll make it through the winters".


can confirm. am weird enough to routinely flag as "inhuman".

thaaaaaaaaanks


In hotels of all tax brackets, you usually get a room key.

And the salient difference is that CCTV is simply defense-in-depth, not a primary means for authentication.


Earnest question: if I was feeling lazy and security-conscious at the same time, would I be better off...

(A) opening chatgpt.com in qubes (but staying logged out, i.e. never creating a chatgpt account)

-or-

(B) creating a freemium chatgpt account

?

(Obviously, the "best" answer would be something like running a local LLM from an airgapped machine in a concrete bunker :) But that's not what I'm after).


[Obligatory: Engineering background. Not an expert]

I've always found it a bit odd that we DO define "i" to help us express complex numbers, with the convenient assumption that "i = sqrt(-1)"... but we DON'T have any such symbols to map between more than 2 dimensions.

I felt a bit better when I found out about:

- (nth) roots of unity: to explore other "i"-like definitions, including things like roots of unity modulo n, and hidden abelian subgroup problems (which feel a bit to me like dealing with orthogonal dimensions)

- tensors: e.g. in physics, when we need a better way to discuss more than 2 dimensions, and often establish syntactic sugar for (x, y, z, t)
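As a concrete toy example (Python, with a made-up helper name), the nth roots of unity generalize "i" in the sense that i is just one of the 4th roots:

```python
import cmath

def roots_of_unity(n):
    """The n complex nth roots of unity: exp(2*pi*i*k/n) for k = 0..n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# The 4th roots of unity are 1, i, -1, -i; "i" is the k = 1 case.
for z in roots_of_unity(4):
    print(complex(round(z.real, 9), round(z.imag, 9)))
```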

IDK if that helps at all (or worse, simply betrays some misunderstanding of mine). If so, please complain; I'd appreciate the correction!

