Hacker News | nis0s's comments

These types of frameworks never resolve the core problem with agents, which is that they don't really think, so they're prone to getting stuck in infinite loops even when they're wrong. I haven't used this framework, but my guess is that Devil's Advocate will be the most prone to this problem. But who knows.
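A common mitigation (not something this framework is claimed to do) is a repetition guard: hash or record the agent's recent actions and bail out when the same one keeps recurring. A minimal sketch, with a hypothetical `is_looping` helper and a stand-in action string:

```python
from collections import deque

def is_looping(history: deque, signature: str, window: int = 6) -> bool:
    """Return True if the same action signature keeps recurring."""
    repeats = sum(1 for s in history if s == signature)
    return repeats >= window // 2

# Hypothetical agent loop with the repetition guard.
history = deque(maxlen=6)
for step in range(100):
    action = "search:same query"   # stand-in for whatever the agent decides
    if is_looping(history, action):
        break                      # bail out instead of spinning forever
    history.append(action)
```

This only catches exact repeats; real loops often vary superficially, which is part of why the problem is hard.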

Great idea, I would start by speaking with a trained educator at a university or similar.

Maybe also get some other people on board to create a certified program so if your program doesn’t work out for the student, they can get some credit for spending/wasting time with your group.

The other thing is safety: if you're dealing with young people and involving other adults, you want proper, lawful mechanisms in place to protect both the kids and yourself.

Besides that, teaching is a skill by itself, and teaching poorly can have the opposite of the intended effect.


If AI becomes so capable that it works flawlessly without human intervention, then why can't replaced employees use AI employees to start their own companies and set up competition for existing vendors of all kinds? Retail, ads, and marketing will still be kings, and the next competition will be AI vs. AI.

Sure, one agent will be fine in that setting. But the dynamics and requirements change when you have multiple agents that need to coordinate tool use and task assignments.
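To make the coordination problem concrete, here is one minimal sketch of that setup (my own illustration, not anything from the thread): a thread-safe queue hands each task to exactly one agent, and a lock serializes access to a shared tool. The agent names, task names, and `agent` function are all hypothetical.

```python
import queue
import threading

tasks = queue.Queue()
tool_lock = threading.Lock()
results = []
results_lock = threading.Lock()

def agent(name: str) -> None:
    while True:
        try:
            task = tasks.get_nowait()   # claim a task exclusively
        except queue.Empty:
            return
        with tool_lock:                 # only one agent uses the tool at a time
            outcome = f"{name} did {task}"
        with results_lock:
            results.append(outcome)
        tasks.task_done()

for t in ["lint", "test", "deploy"]:
    tasks.put(t)
workers = [threading.Thread(target=agent, args=(f"agent-{i}",)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Even this toy version shows the new failure modes: contention on the shared tool and the need to guarantee each task is claimed exactly once.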


Beads solves that, and it uses a CLI, not MCP.


Open-source models exist.

Are they _so much_ cheaper to run that they could be used to initiate thousands of "human-like" interactions at negligible costs compared to what the interlocutors will incur?

(I genuinely don't know.)


A sufficiently motivated adversary will have the hardware to run the biggest open source models on prem. The only costs are then electric bills.

Only Amp is dead. The coding agents/tools are fine; you don't see Cursor shutting its doors.

Cursor will not survive much longer; in reality, they are in deep trouble.

With very tight margins, Cursor is just paying its own competitors (Anthropic's Claude Code, OpenAI's Codex, and Google's Gemini) for the tokens to destroy its own business.


There is a need for an orchestration-ready IDE, but the entire ecosystem of cost per token is unsustainable for the amount of work that needs to be handled. Open source models and tools are the only solution.

From reading about it, it seems to me Claude is a very limited agent, but one optimized to score highly on leaderboards. I suspect most people don't realize their interactions with Claude, both via the web app and the API, can be used for training, regardless of account subscription status. But yes, you can opt out.

I think it's equally likely that the property management company here has an incorrectly configured S3 bucket (or something like it) that has unintentionally exposed a bunch of leases. It makes more sense to me that a directory of hundreds or thousands of nearly-identical leases would be exposed online and scraped than the possibility that someone uploaded enough lease documents to Claude for them to all be included in training data. I'd be really surprised, actually, if any major AI company was taking uploaded documents and using them for training, since they're very, very likely to contain extremely sensitive data.

There’s no such thing as outpacing AI

So far so good :)

We don’t even have proper self-driving cars yet.


I am guessing you mean AGI? People usually mean that these models aren't generally intelligent, i.e., they don't display cognitive flexibility across different types of tasks without training or extensive fine-tuning. Think of a human intern: you don't need to tell that intern more than a simple phrase when you need something. The intern will figure out how to do it, including figuring out what they don't know and what they need to learn to do that thing.

https://en.wikipedia.org/wiki/Artificial_general_intelligenc...

