
That’s fair. You can decline to participate in casual conversations and be annoyed.

Most people don’t mind someone initiating a casual conversation in a non-threatening manner. Most will enjoy it, at least sometimes.

I’m happy for the author here, especially that he was able to shrug off these awkward interactions and move on.


It’s basically “If you want to be liked, you should try to be likable.”

Is the only way to not be manipulative to be a curmudgeonly jerk?

If being pleasant means being manipulative, then indeed everyone should try to be a bit more manipulative.


Yeah. It's only wrong if there's deception involved, or a failure to care about the needs of the other.

> Influence doesn’t have to be manipulative

> influence is a euphemism for manipulation

Surely you can see that your statements contradict each other.

> Influence for influence sake is selfishly motivated.

Hard disagree. It certainly can be, but doesn’t have to be. A person can be a positive influence for no other reason than they feel like it’s a good thing to do. You could influence your coworkers to be better engineers and not gain anything from it.

I mean, we could retreat to the “oh you feel good about it, so it’s still selfish” stance, but that’s uninteresting.


> “influence” is a euphemism for “manipulate.”

This is exactly what he’s talking about.

The premise of the book is essentially, “what if you were a generally nice person who deserved friends”.

The whole “you could only possibly pretend to care about other people” response to the book is vaguely psychopathic.


> The whole “you could only possibly pretend to care about other people” response to the book is vaguely psychopathic.

I prefer to interpret it charitably: the line between influence and manipulation can be pretty fuzzy, and some people come to a conclusion of, basically, "don't do it at all because it's always selfish."

I think it's a flawed view because it's impossible to go through life not influencing anyone and not wanting anything from anyone, so you may as well try to do it in a way that is generally win-win or at least not win-lose.


I think the most charitable interpretation would be that people expressing that view are deeply self-conscious. They are afraid that if they followed the advice in the book, they might be perceived as manipulative, and they want to avoid that possibility. They hide from that fear by insisting that it actually must be manipulative.

Outside of that, I can only see less charitable interpretations. E.g., the idea that the only reason someone could ever compliment another person is to manipulate them says either that the person holding the idea literally can’t imagine interacting positively with someone for non-selfish reasons (psychopathy) or that they hold such a low opinion of the rest of humanity that they believe no one else could (misanthropy).


> I find it hard to believe that there is a demographic of people that were yearning to write code, but simply could not because they lacked LLMs. Is it the price?

Yes, because the price is measured in time.

With LLM tooling I’ve churned out idiosyncratic tools that fit my use cases quickly. Takes maybe a day instead of a week. A week instead of months. The fast turnaround changes the economics of writing custom tools for myself.


This is vapid condescension.

The comment you replied to made no statements about math or proofs. They made a statement about working effectively in systems of nondeterminism. Your statement seems to imply that this is dumb, as if working in a world of full determinism is an option.


Thank you for "vapid condescension".

I've wanted a term for this for decades!


When you do have the option of determinism, but intentionally eschew it in favour of a strictly inferior nondeterministic tool, then yes, it is kinda dumb.

What deterministic option are you referring to here? Humans certainly are not deterministic in how they interpret instructions and write code. If I asked you to implement a feature and a month later asked you to implement the exact same feature, you likely wouldn’t do it the same way again. Two different people certainly wouldn’t.

When you cling to determinism and call a clearly useful and powerful tool “strictly inferior”, I would say this misses the point.

Strictly inferior != bad. It's relative. One tool will still give the output I intended long after I'm dead and decomposed, while the other might not the very next time I run it.

Each move from one layer of the tech stack to a higher one involved a function:

      f(x) -> y
Given a specific x, you always get a specific y as the artifact being generated.

Not at all. If this were true then the Python code in question would generate deterministic binary. Of course that’s not what happens. The Python runs through an interpreter that may change behavior on different runs. It may change behavior version to version. It may even change behavior during multiple invocations of a function in the same running instance. Because all of that is abstracted away.

Same for the C code. You give up control and some determinism for the higher abstraction. You might get the same output between compilations on the same version, but that’s not actually guaranteed, and version-to-version consistency certainly isn’t.

Moving to a higher layer of abstraction very often results in less constrained behavior.
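
For a small concrete illustration (just a sketch; it relies on CPython salting string hashes per process unless PYTHONHASHSEED is pinned):

    # The "same" compliant Python program, different observable output per run:
    print(hash("hello"))          # differs between interpreter invocations
    print(list({"a", "b", "c"}))  # set iteration order can also vary run to run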


That's not a good analogy.

With a high level language implementation of any sort, the actual instructions executed by the CPU may vary according to how it was compiled or run, or what machine you run it on, but the behavior will not.

The high level language defines its own level of abstraction, defining exact behavior, allowing the developer to have full control over program behavior, algorithms, UI, etc.

An LLM + natural language instructions is not a program-like abstraction of what you want the computer to do, because it does not have that level of precision. Natural languages are fuzzy and imprecise, because they have been developed for communication, not precise machine-level specification.

Obviously you can "vibe code" at different levels of detail, ranging from "build me an app to do X" to "here are 20 1000-word essays specifying the dos and don'ts of what I want you to build", but in either case you are nowhere near the level of precision of using a programming language to specify exactly what you want.

So sure, "vibe coding" let's you accomplish A result with less attention to detail than using a programming language, but it's not a "higher level abstraction" in the sense that HLLs are, since it doesn't define what that abstraction is. It lets you get A result, but not define a SPECIFIC result, since natural languages just aren't that precise... natural language means whatever the person/thing interpreting it interprets it to mean.

Of course you can use an LLM as a way to "rough out" a function or app, and as a crude tool to manipulate that roughed out form (or an existing project), but natural language does not have precise semantics and therefore cannot provide a precise definition of what you want to do.


It wasn’t my analogy. It was the article’s so I responded to that. There are many more (and better) examples of how abstractions give up control, precision, and/or determinism.

> The high level language defines its own level of abstraction, defining exact behavior

This is not entirely true. A high level language defines some behaviors, leaving many behaviors to be undefined and implementation specific.

Many of those unspecified behaviors can matter in some cases.

> It lets you get A result, but not define a SPECIFIC result, since natural languages just aren't that precise...

You are repeating the same error as the article and missing the fact that while an abstraction lets you specify or control some things, it leaves many things out of your control. The higher the abstraction, the more stuff that is left out of your control. And maybe you don’t care about the things outside your control (great, the abstraction worked!) but regardless there are many things left unspecified in the typical abstraction and very often eventually you will care, which is why people say things like “all abstractions are leaky”.

For a simple example, think of writing something like this:

    MessageBox("hello world", OkCancel)

MessageBox is an abstraction over a massive amount of logic. You specified a string and a set of buttons and not much else. You give up control over the styling, the placement of the buttons, the actual button text (very likely will be localized), where the box will appear, and so much more.

You are not getting a specific result. You are getting a result that meets the contract.

“Write a program that shows a hello world message box” is a much higher level of abstraction than even that, and you are giving up significant specificity and determinism. Is it a good abstraction? That’s a great question. But it certainly is an abstraction.
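
If it helps, here is a minimal sketch of the same point using Python's tkinter (the toolkit choice is arbitrary on my part; any GUI library makes the point):

    from tkinter import Tk, messagebox

    root = Tk()
    root.withdraw()  # hide the empty main window
    # You specify only a title, a message, and a button set.
    result = messagebox.askokcancel("Greeting", "hello world")
    # Styling, placement, fonts, and localized button labels are all
    # decided by the toolkit/OS, outside your control.
    print("OK" if result else "Cancel")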


> This is not entirely true. A high level language defines some behaviors, leaving many behaviors to be undefined and implementation specific.

Sure, but you don't need to use those, and shouldn't.

A programming language lets you avoid undefined behavior and stick to the defined abstraction provided by the language.

Natural language does NOT let you do this, because words have no strict meaning, and the meaning of any sentence is undefined and up for interpretation and contextual clarification, etc, etc. Maybe more to the point, LLMs are not concerned with meaning - they are concerned with continuation prediction. The LLM/agent that "ignored instructions" and deleted all the customer's data and backups wasn't "being bad" or "ignoring instructions", it was just statistically predicting, and someone was daft enough to feed those predictions into an execution environment where real-world consequences could ensue.


> A programming language lets you avoid undefined behavior and stick to the defined abstraction provided by the language.

Yes and no. It lets you avoid undefined-behavior traps, but it still leaves you relying on endless implementation-defined choices.

> Natural language does NOT let you do this, because words have no strict meaning, and the meaning of any sentence is undefined and up for interpretation and contextual clarification, etc, etc.

This is a fair criticism of natural language. It is less well defined. That doesn’t stop it from being an abstraction, though perhaps it fairly makes it a problematic abstraction.

> The LLM/agent that "ignored instructions" and deleted all the customer's data and backups wasn't "being bad" or "ignoring instructions"

That one was entirely human error. And not just “oops I trusted the AI too much”. That guy was sharing the DB volume across prod and staging so deleting the staging DB also took down prod.

If you run your business like that, eventually a human will do the same thing, because it’s catastrophically dumb.


> It lets you avoid undefined-behavior traps, but it still leaves you relying on endless implementation-defined choices.

That's a strange way to think of a programming language specification.

A programming language is an abstraction that mostly fully specifies behaviors that any compliant implementation must adhere to, and that you as a user of the language can therefore rely on.

There may be a few details in a language specification that are specified as implementation defined, but that doesn't mean they are not specified, it just means they are specified by the implementation rather than the standard.


> A programming language is an abstraction that mostly fully specifies behaviors that any compliant implementation must adhere to, and that you as a user of the language can therefore rely on.

This is true but hand-waves over a lot of behavior. An implementation of Python could be 10x faster or 10x slower and still be fully compliant with the specification.

There’s a reason that NumPy’s core is written in C and not Python. The implementation-specific details sometimes matter. The abstraction is leaky as soon as you care about anything not explicitly specified in the abstraction.
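
As a rough sketch (assuming NumPy is installed; exact numbers will vary by machine, and both versions are equally "compliant"):

    import timeit
    import numpy as np

    xs = list(range(1_000_000))
    arr = np.arange(1_000_000)

    # The language spec says nothing about which of these should be fast.
    print(timeit.timeit(lambda: sum(xs), number=10))         # pure Python
    print(timeit.timeit(lambda: int(arr.sum()), number=10))  # C under the hood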


> “Write a program that shows a hello world message box” is a much higher level of abstraction than even that, and you are giving up significant specificity and determinism. Is it a good abstraction? That’s a great question. But it certainly is an abstraction.

But to whom/what is it an abstraction?

To a human, sure. If I told an intern to "write a hello world message box", I'd expect at least to get something approximating that request.

To an LLM? The LLM has no intent or understanding - it's just a statistical predictor. Maybe it'll "interpret" your request as only wanting a hello world message box, so it'll delete your company's entire git repository to ensure a clean slate to start from.

I think that when you say "it is certainly an abstraction" what you implicitly mean is "it is certainly an abstraction TO ME", but an LLM is not you, and does not have a brain, so what is an abstraction to you shouldn't be taken as being an abstraction to an LLM (whatever that would even mean).


No, we can’t retreat to the “LLMs so dumb” position every time we discuss anything around them. This is not a rebuttal. It is an interesting thing to discuss on its own merit, but it’s not relevant here.

If a natural language specification is an abstraction over the code implementation, then it is an abstraction whether given to a human or an LLM. The LLM being arguably a bad tool does not change that.


So let's take LLMs and AI out of the discussion altogether.

The question is then: can specifying a computer behavior in (inherently imprecise) natural language be considered some sort of "program", a higher level abstraction in the same sense that an HLL provides a higher level of abstraction over programming in assembler?

I would say no, for various reasons:

1) There is a difference between an abstraction and under-specifying your requirements

2) Whatever you want to call it, any descriptive language that is insufficiently precise to unambiguously describe the details that are important to you is not very useful as a way to specify system behavior

3) If you are not only using natural language to describe the desired behavior of a system, but are also assuming that the person (or thing) interpreting the description is bringing their own knowledge and expertise to bear in "fleshing out the details", then you don't have a full specification at all, even an abstract one. What you have in that case is not something analogous to a computer program, but more like business requirements.


I think this is all valid. What I think bears highlighting is that often you don’t need a 100% unambiguous specification. Very often “meets the business requirements” is totally sufficient, and plenty of other stuff is either implicit or just doesn’t really matter.

If you need to develop an API that you’ll support publicly for 10+ years, yeah. You probably want to be really precise. If you need to code up yet another feature in some CRUD app, it matters a lot less.

Sometimes natural language is entirely sufficient to “unambiguously describe the details that are important”.


Sure, there are some cases where "business requirements is all you need", and certainly CRUD apps are the poster child for this.

However, no/low-code solutions for things like this have existed basically forever, way before AI (I remember "The Last One" from the 1980s that was going to make custom coding obsolete), and yet people still mostly prefer to hand-code CRUD apps.

It's interesting to guess why this is the case. Is it just because writing code that is 100% boilerplate is so simple that automation has no value? Or is it that even in this simplest of cases there is in fact still some human taste/skill being applied, so that the app doesn't just "meet business requirements" but also doesn't suck? And/or, seeing as the business wants someone to call and fix it when it inevitably stops working, you may as well have that person write it in the first place? No doubt there are other factors too.

I guess the multi-billion-dollar question is how much of software development overall consists of CRUD-like "business requirements are all you need" applications vs how many need some actual software engineering. Time will tell how many companies end up choosing to use automation tools to write software, vs the CRUD situation where, despite this having been possible for a long time, hardly anyone chooses to do so, for a myriad of other reasons.


Can you explain how Python or C programs change from invocation to invocation?

Mostly because the behavior is implementation defined. So long as the behavior meets the contract, the compiler/interpreter is free to do whatever it wants.

Python could certainly optimize repeated code paths to make them more efficient. I don’t know that the standard implementation actually does, but it could. Spending extra time optimizing repeated code paths is a reasonable choice for an interpreted or JIT compiled language.

I would not expect C to change from invocation to invocation mostly because C is supposed to be boring and predictable. That’s kind of its thing. But again, it could. There’s nothing in the C spec I’m aware of that says the C compiler has to ensure that each invocation of a piece of code will execute the same machine instructions.


>So long as the behavior meets the contract, the compiler/interpreter is free to do whatever it wants.

Yes, and that's how it's supposed to be. Any description that determines the totality of a problem space is an implementation itself.

Imagine the following requirements:

f(0) = 0, f(2) = 4

Both f(x) = x^2 and f(x) = 2x are correct ways to implement said requirements. But if you start relying on f(1) = 2, you might get in trouble with a coworker that relies on f(1) = 1. This is undefined behavior and should be avoided.
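
In code, the point is trivial but worth seeing (a sketch with hypothetical names):

    # Both satisfy the stated requirements f(0) == 0 and f(2) == 4.
    def f_square(x): return x ** 2
    def f_double(x): return 2 * x

    assert f_square(0) == f_double(0) == 0
    assert f_square(2) == f_double(2) == 4
    # But f_square(1) == 1 while f_double(1) == 2: relying on either
    # value is relying on an implementation, not on the contract.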

>There’s nothing in the C spec I’m aware of that says the C compiler has to ensure that each invocation of a piece of code will execute the same machine instructions.

It can't, because C can be written for any system you want. If I ask the compiler to compile x *= 2, it might use the mul instruction or it might use shl; both are OK.


Ok but how does that change the behaviour of the program?

Depends on what you mean.

Assuming you write code that does not take advantage of undefined behavior, then in general you would expect the correctness of your program to be consistent. But you would not expect the performance, for example, to be consistent. An optimizing JIT compiler might certainly run the 3rd invocation of code path way faster than the first.
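
A sketch of what that can look like (assuming a JIT runtime such as PyPy; CPython's plain interpreter won't show nearly the same effect):

    import time

    def hot(n):
        total = 0
        for i in range(n):
            total += i
        return total

    # Same inputs, same result every time, but on a JIT runtime the
    # later runs are typically much faster once the loop gets compiled.
    for run in range(3):
        t0 = time.perf_counter()
        hot(10_000_000)
        print(f"run {run}: {time.perf_counter() - t0:.3f}s")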


Could you do it cheaper?

The $10/month gets you storage, but also bandwidth and hosting and a bunch of tooling. Worth it? Probably so if you want something that mostly just works.


Me, individually, providing it to one customer? no.

At scale, providing it to tens of thousands? yes.

It's a perfectly fine price for a customer to pay and not worry about it, but it's not squeezed to extract every fraction of a cent because competition is so fierce. In a race to the bottom you'd expect the bottom to be approached, but it isn't.

Bluehost, Kinsta, WPEngine, GoDaddy etc, are marketing companies that sell webhosting, and they have very healthy margins. They compete on ads, not on price.


After a terrible, possibly criminal experience with a hoster, I ended up with: https://www.interserver.net/

I am very happy. The speed is insane. I always thought and was told that WP is the reason for a slow website. No, it was my host. I pay around 10 USD per month, but I think the smaller plans start lower.

For what it's worth, I am very happy with them. But I only host a few WP sites and FreshRSS. I think they support Python too, but for Django I use: https://www.pythonanywhere.com/ I pay 5 USD a month there, but I think this plan is not sold anymore.


Gary Marcus here is making an argument about souls and just doesn't realize it. You could rewrite this whole post replacing "consciousness" with "soul" and it would flow almost the same.

He handwaves consciousness as "internal states" as if that means anything and as if an LLM has no internal state. (This seems to be the analog for "divine touch".) He can't define consciousness rigorously, partly because we don't at all understand consciousness, but also because any attempt to do so would allow a scientific response.


> What I am trying to say is that we can only agree something is conscious, and only if it's working on the same principles a human brain does, closely. It's an agreement, not proof, not definitions. We collectively start accepting it, without KNOWING. And the safest way to do that is on something which is working exactly like a human brain. Anything else we can only lose certainty.

This means that "consciousness" is simply a synonym for "human".

By that "agreement", sure, a machine cannot be conscious. But I don't think this is what most people mean when they talk about whether an LLM could be conscious. Because of course it's not human. So they must be asking something more interesting.

