Hacker News | personjerry's comments

Not mine. I just think it's ridiculous.

At some point I've wondered if "fiduciary duty", when pushed to the highest corporate levels, always conflicts with "make the world a better place".

i.e. Fiduciary Duty Considered Harmful


Had 1000 of each resource, lots of income, landing pads and habitats, but never got more colonists?


Same, I'm stuck at 18 inhabitants.


I'm stuck at 0 lol


This seems like AI slop?

There's not a single real example, and it even has all the em-dashes intact.


> Over nearly 2,000 Claude Code sessions and $20,000 in API costs

Well there goes my weekend project plans


Well, you can use Jules and spend zero dollars on it. I also created a similar project: a C11 compiler in Rust using an AI agent + 1 developer (https://github.com/bungcip/cendol). Not fully automated like Anthropic's, but at least I can understand what it did.


> ... has sunset

No accountability in the language

No rationale

No fucks given

What was the point of this post?


xAI owns Twitter... So now a space company owns Twitter? Wtf


This does nothing.

I'll just start a business that mails letters to companies for you.

Now, an APPLICATION FEE, that's interesting. Hmm.


This does nothing.

I'll just start a business that lends money to job applicants. Apply now, pay later (ANPL).


If you continue to get mountains of slop applications after introducing an application fee, then at least you have a new revenue stream.


On the company side, you have a new revenue stream. On the ANPL side, you have another product you can securitize. Revenue generation and risk transfer, a win-win!


Well, presumably your business charges something to mail out job applications to companies? Like an application fee, that charge imposes a cost on the applicant, which will do something (presumably reduce application volume).


Plus, by letting people replace that fee with time spent writing the letter, those who don't have the finances to pay a whole bunch of application fees can still apply for as many jobs as they're willing to put the time into.


The barrier to entry has gone up from nearly nothing to signing up for (and presumably paying for) your service. This is a significant increase, which will significantly decrease BS applications.


So www.postgrid.com?


I'm waiting for the yearly "Waterloo is the Silicon Valley of Canada" article that always fails to deliver on its promise.

I grew up in Waterloo but it's just not it lol.


I don't really understand how this differentiates against the competition.

> Independence

Any "agent" running against code review instead of code generation is "independent"?

> Autonomy

Most other code review tools can also be automated and integrated.

> Loops

You can also ping other code review tools for more reviews...

I feel like this article actually works against you by presenting the problems and then inadequately solving them.


> Independence

It is, but when the model/harness/tools/system prompts are the same or similar, the generator and reviewer fail in similar ways. Question: would you trust a Cursor review of Claude-written code more, less, or the same as a Cursor review of Cursor-written code?

> Autonomy

Plenty of tools have invested heavily in AI-assisted review - creating great UIs to help human reviewers understand and check diffs. Our view is that code validation will be completely autonomous in the medium term, and so our system is designed to make all human intervention optional. This is possibly an unpopular opinion, and we respect the camp that might say people will always review AI-generated code. It's just not the future we want for this profession, nor the one we predict.

> Loops

You can invest in UX and tooling that makes this easier or harder. Our first step towards making this easier is a native Claude Code plugin, installed via the `/plugins` command, that lets Claude Code run a plan, write, commit, get review comments, plan, write loop.


Independence is ridiculous: the underlying LLMs are too similar in their training data and methodologies to be anything like independent. Trying different models may somewhat reduce the dependency, but they have all read Stack Overflow, Reddit, and GitHub in their training.

It might be an interesting time to double down on automatically building and checking deterministic models of code that were previously too much of a pain to bother with, e.g. adding type checking to lazy Python code. These kinds of checks really are model-independent, and using agents to build and manage them might bring a lot of value.
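To make the idea concrete, here's a minimal sketch of a deterministic, model-independent check. In practice you'd run a static checker like mypy over the annotated code; the `checked` decorator and `total_price` function below are made up for illustration, enforcing annotations at runtime so the example is self-contained:

```python
import inspect
from functools import wraps

def checked(fn):
    """Reject calls whose arguments don't match fn's type annotations.

    A toy stand-in for static type checking: deterministic, and
    independent of whichever model wrote the code under review.
    """
    sig = inspect.signature(fn)

    @wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = fn.__annotations__.get(name)
            # Only check plain classes; skip generics and unannotated params.
            if isinstance(ann, type) and not isinstance(value, ann):
                raise TypeError(
                    f"{name} expected {ann.__name__}, got {type(value).__name__}"
                )
        return fn(*args, **kwargs)
    return wrapper

@checked
def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

print(total_price(3, 2.5))      # passes the check
# total_price("3", 2.5)         # would raise TypeError, no LLM involved
```

The same annotations an agent adds for this toy check are exactly what a real static checker consumes, so the verification step stays deterministic even if the code was machine-written.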


> Would you trust a Cursor review of Claude-written code more, less, or the same as a Cursor review of Cursor-written code?

You're assuming models/prompts insist on a previous iteration of their work being right. They don't. Models try to follow instructions, so if you ask them to find issues, they will. 'Trust' is a human problem, not a model/harness problem.

> Our view is that code validation will be completely autonomous in the medium term.

If reviews are going to be autonomous, they'd be part of the coding agent. Nobody would see it as the independent activity you mentioned above.

> Our first step towards making this easier is a native Claude Code plugin.

Claude can review code based on a specific set of instructions/context in an MD file. An additional plugin is unnecessary.

My view is that to operate in this space, you gotta build a coding agent or get acquired by one. The writing was on the wall a year ago.


> It is, but when the model/harness/tools/system prompts are the same or similar, the generator and reviewer fail in similar ways.

Is there empirical evidence for that? Where does it sit on an epistemic meter between (1) "it sounds good when I say it" and (10) "someone ran an evaluation and got significant support"?

"Vibes" (2 or 3 on the scale) are OK; just honestly curious.

