Hacker News | SilverSlash's comments

I hadn't heard of TanStack but a quick look at their website doesn't inspire confidence tbh. I mean, just take "TanStack Pacer".

It provides such things as:

```
import { Debouncer } from '@tanstack/pacer' // class

const debouncer = new Debouncer(fn, options)

debouncer.maybeExecute(args) // execute the debounced function
debouncer.cancel() // cancel the debounced function
debouncer.flush() // flush the debounced function
```

Why? Just why do you need to install some "framework" to implement debouncing? Isn't this sort of absurdism the reason why the node ecosystem is so insecure and vulnerable in the first place? Just write a simple debouncer using vanilla js...
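For reference, a hand-rolled trailing-edge debouncer with roughly the same surface as the Pacer snippet above (a sketch, not the Pacer implementation) fits in about twenty-five lines:

```javascript
// Trailing-edge debouncer: fn runs once, `wait` ms after the last call,
// with cancel() and flush() escape hatches like Pacer's Debouncer.
function debounce(fn, wait) {
  let timer = null;
  let lastArgs = null;

  function debounced(...args) {
    lastArgs = args;
    clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      fn(...lastArgs);
    }, wait);
  }

  // Drop the pending call entirely.
  debounced.cancel = () => {
    clearTimeout(timer);
    timer = null;
  };

  // Run the pending call now instead of waiting out the timer.
  debounced.flush = () => {
    if (timer !== null) {
      clearTimeout(timer);
      timer = null;
      fn(...lastArgs);
    }
  };

  return debounced;
}

// e.g. const onResize = debounce(() => relayout(), 250);
//      window.addEventListener('resize', onResize);
```

Whether ~25 lines of timer bookkeeping belongs in a dependency or in your own utils file is exactly the disagreement in this thread.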


Not to feed the trolls, but you responded to a comment about TanStack Start (a full-stack metaframework) by denigrating @tanstack/pacer -- a separate, niche utility published by the same team.

You're entitled to your opinions, but I'm happy to defend the rationale of leveraging battle-hardened, rigorously-tested, open-source, type-safe libraries instead of DIY cowboy vanilla js spaghetti.


Obviously it's more than just debouncing. https://tanstack.com/pacer/latest/docs/overview

That's putting it mildly!

TanStack started out by providing a very good JS table library. Now they offer a Router, and some more libs. They are definitely an up and coming name in the JS space.

That's... not quite right.

[EDIT] I typed "Router" when I meant "Query".

TanStack Query is the relatively newer name for React Query -- one of the most popular JS libraries of all time.

TanStack Start is a recent metaframework (and the one w/ the brightest future, IMO), but Tanner and team have profoundly significant bona fides. IOW, the dev team is far from being the "new kids on the block".


Do you have a source for TanStack Router being a newer name for React Router? Doesn't seem like it when looking at the sites for both projects.

Are you thinking of the whole Remix/ReactRouter thing?


(facepalm)

Thank you, but no. I typed "Router" when I meant "Query". TanStack Query is the newer name for the library FKA react-query.

TanStack Router is an alternative to React Router.

TanStack Start is an alternative to Remix/react-router-7's framework mode.

The naming history and evolution of react-router and its relationship to Remix is a bit convoluted, but an unrelated tangent to the point I was making.


Nah, they are the next toy for the magpie developers, now that Next.js is no longer cool.

That's... not quite right :)

React Router, which belongs to Remix, which was acquired by Shopify, is here: https://github.com/remix-run/react-router

Tanstack Router is an entirely new router.


Thanks yes I know, I typed "Router" meaning "Query", noted in a peer comment. sigh.

As in, htmx is better? I haven't used it but last I looked into it I was extremely confused as to whether it was a meme, an actual framework, or both.

HTMX is great when your web interface is just a representation of a server state.

If the web interface is an application backed by remote state, HTMX falls apart.


> If web interface is an application backed by a remote state

What does that mean?


can you give an example?

None of the above. It is a utility (I guess maybe a framework) for a feature that was cool in ASP.NET back in 2005. But that is its charm. It is just JS swapping out the DOM for you.

Not sure what you're thinking of, but the first release of HTMX was 2020. Its predecessor, intercooler, was first released in 2013.

Yep but the idea goes back further. Memory is vague but this may have been it. https://www.ajaxtoolkit.net/DynamicPopulate/DynamicPopulate....

A lot of the LLMs are very familiar with next.js and vercel is also aggressively building an ecosystem around their tooling for LLMs. So I wonder if this problem will only be exacerbated when everyone using LLMs is strongly nudged (forced) to use next?

When you create a Next.js project from Vercel's template, you get an AGENTS.md that literally says "THIS IS NOT THE NEXT.JS YOU KNOW"

Is that because LLMs default to the older pages router? Or are they actually providing a different version of the library optimised in some way for agents?

I think they just want LLMs to read the docs they began shipping[0] along with the library instead of using their own knowledge. For example, when I used Next.js a few months ago, models kept using cookies() and headers() without await, because that's how older Next.js versions worked, but modern Next.js requires await. I imagine there are more cases like this.

[0]: https://nextjs.org/docs/app/guides/ai-agents#how-it-works
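The cookies()/headers() case looks roughly like this (a sketch of the Next.js 15 change described above; it only runs inside a Next.js app, not standalone):

```javascript
// app/page.js -- Next.js 15+ made these request APIs async.
import { cookies, headers } from 'next/headers'

export default async function Page() {
  // Older Next.js (what many models learned): synchronous access.
  //   const theme = cookies().get('theme')   // breaks on modern versions

  // Modern Next.js: the APIs return promises and must be awaited.
  const cookieStore = await cookies()
  const headerList = await headers()

  const theme = cookieStore.get('theme')?.value
  const ua = headerList.get('user-agent')

  return <p>{theme} / {ua}</p>
}
```

A model trained mostly on pre-15 code will happily emit the commented-out version, which is exactly the failure mode the shipped docs are meant to prevent.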


One rather prominent case would be Tailwind. v4 made breaking changes in the way Tailwind is set up, requiring different packages and syntax. However, if you ask an LLM how to set up Tailwind on your Vite & React app, it will confidently list the setup steps for Tailwind v3, which no longer work.

At times I would see people daily asking for help with their broken Tailwind setups, and almost always it was them trying to use Tailwind v4 the v3 way because some AI told them so.
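Concretely, the v3-vs-v4 gap on a Vite & React app looks something like this (a sketch based on the current Tailwind v4 setup; the v3 steps in the comments are what LLMs tend to suggest):

```javascript
// vite.config.js -- Tailwind v4 on Vite.
//
// v3 way (what models keep suggesting, and what no longer works with v4):
//   npx tailwindcss init -p          -> tailwind.config.js + postcss.config.js
//   CSS file starts with:
//     @tailwind base; @tailwind components; @tailwind utilities;
//
// v4 way: a dedicated Vite plugin plus a single CSS import:
//   /* src/index.css */
//   @import "tailwindcss";
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import tailwindcss from '@tailwindcss/vite'

export default defineConfig({
  plugins: [react(), tailwindcss()],
})
```

Mixing the two (v4 packages with v3 directives, or vice versa) is the broken-setup pattern described above.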


This was so unbelievably obnoxious when I first started trying to use Cursor last year. Also because if you tried not to use Tailwind, the AI would eventually try to force it in anyway. I don't know how it is nowadays, but that was so frustrating and funny at the same time. And! When I set up Tailwind v4 ahead of time, got it working, and told the AI about the v4 changes, it would "correct" it to v3 anyway. Another fun "metric" was to ask an AI how to set up React, because it was still recommending create-react-app, though nowadays I'm sure it'll be hard to find any model that still has that in its training set.

We've had shitty bloated websites before LLMs were a thing.

Yeah but LLMs are trained on a majority of shitty bloated things, that's kinda why their output is garbage but workable.

Someone needs to make a compilation of all these classic OpenAI moments. Including hits like GPT-2 too dangerous, the 64x64 image model DALL-E too scary, "push the veil of ignorance back", AGI achieved internally, Q*/strawberry is able to solve math and is making OpenAI researchers panic, etc. etc.

I use Codex btw, and I really love it. But some of these companies have been so overhyping the capabilities of these models for years now that it's both funny to look back and tiresome to still keep hearing it.

Meanwhile I am at wits end after NONE OF Codex GPT-5.4 on Extra High, Claude Opus 4.6-1M on Max, Opus 4.6 on Max, and Gemini 3.1 Pro on High have been able to solve a very straightforward and basic UI bug I'm facing. To the point where, after wasting a day on this, I am now just going to go through the (single file) of code and just fix it myself.

Update: some 20 minutes later, I have fixed the bug. Despite not knowing this particular programming language or framework.


> I am now just going to go through the (single file) of code and just fix it myself.

That's front page news, in this era.


I understand how laughable that sounds when I say it out loud. But the reality is, when I'm in a state of 'Tell LLM what to do, verify, repeat', it's really hard to sometimes break out of that loop and do manual fixes.

Maybe the brain has some advanced optimization where once you're in a loop, roughly staying inside that loop has a lower impedance than starting one. Maybe that's why the flow state feels so magical, it's when resistance is at its lowest. Maybe I need sleep.


> it's really hard to sometimes break out of that loop and do manual fixes

You're aware of the MIT Media Lab study[0] from last summer regarding LLM usage and eroding critical thinking skills...?

[0] "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task", June 2025. DOI: 10.48550/arXiv.2506.08872


>> it's really hard to sometimes break out of that loop and do manual fixes

it's not just an erosion of skills, it can also break the whole LLM toolchain flow.

Easy example: put together some fairly complicated multi-facet program with an LLM. You'll eventually hit a bug that it needs to be coaxed into fixing. In the middle of this bug-fixing conversation, go ahead and fire an editor up and flip a true/false or change a value.

Half the time it'll go unnoticed. The other half of the time the LLM will do a git diff and see those values changed. It will then proceed to go on a tangent, auditing code for specific methods or reasons that could have autonomously flipped those values.

This creates a behavior where you not only have to flip the value, the next prompt to the LLM has to be "I just flipped Y value.." in order to prevent the tangent that it (quite rightfully in most cases) goes off on when it sees a mysteriously changed value.

so you either lean in and tell the llm "flip this value", or you flip the value yourself and then explain. It takes more tokens to explain, in most cases, so you generally eat the time and let the LLM sort it.

so yeah, skill erosion, but it's also just a point of technical friction right now that'll improve.


This was a great comment. I don't know if it's common knowledge, but this really helped clarify how the shift happens.

I also remember half coding and half prompting a few months back, only to be frustrated when my manual changes started to confuse the LLM. Eventually you either have to make every change through prompting, or be ok with throwing away an existing session and add back in the relevant context in a fresh one.


When I have to pop in and solve a problem, I tell it I fixed it and what was wrong.

Depending on the depth of its misunderstanding it could become a memory note or a readme update. I haven’t had any real issues with that approach.


It sucks that you have to do this.

I'm not yet at the point where I'm comfortable with just vibe coding slop and committing to source control. I'm always going in and correcting things the LLM does wrong, and it really sucks to have to keep a mental list of all the changes you made, just so you can tell your Eager Electronic Intern that you made them deliberately and to not undo them or agonize over them.


Every time I change something outside the chat interface claude tells me a linter made a change.

> But the reality is, when I'm in a state of 'Tell LLM what to do, verify, repeat', it's really hard to sometimes break out of that loop and do manual fixes.

My experience is rather that I get annoyed by bullshit really fast, so if the model does not get me something that is really good, or can at least easily be told what needs to be done to make it exceptional, I tend to lose my temper really fast and get annoyed with the LLM.

With this in mind, I rather have the feeling that you are simply too tolerant of shitty code.


I have the same problem. I had lines directly in front of me where I needed to change some trivial thing, and I still prompted the AI to do it. Also, for some tasks the AI is just less error-prone, and vice versa. But it seems the context switch from prompting to coding isn't trivial.

I think it’s called "sunk cost fallacy".

"The last output is so close to exactly what I wanted, I can't not pull the machine's lever a few more times to finally get the jackpot..."

> Maybe the brain

…is already damaged by reliance on AI.


And that’s exactly why I’ve stopped using llm’s entirely.

People who are using them frequently: you’re delusional if you think your brain is not harmed. I won’t go into great detail because I can’t be bothered and I’m sure this post will be down voted - but - I can share my own experience. Ever since I stopped using them my ability to focus, think hard and hold concepts in my brain and reason about them has increased immensely. Not only that but I re-gained the conditioning of my brain to ‘deal with the pain’ that comes with deep thought - all of that gets lost by spending too much time interacting with llm’s.


Thank you for the belly laugh.

Are you sure they are not just refusing to solve your UI bug due to safety concerns? They may be worried you'll take over the world once your UX becomes too good.

> a very straightforward and basic UI bug

Show us the code, or an obfuscated snippet. A common challenge with coding-agent related posts is that the described experiences have no associated context, and readers have no way of knowing whether it's the model, the task, the company or even the developer.

Nobody learns anything without context, including the poster.


A pretty easy way to construct a bug that is easy for a human to solve but difficult for an AI is to have it to do something with z-indexes. For instance, if your element isn't rendering because something else is on top of it, Claude will struggle, because it's not running a browser, so the only way it could possibly know there was a bug would be to read every single CSS and HTML file in your entire repo. On the other hand, a human can trivially observe the failure in a browser and then fix it.

This is a pretty simple thing, but you can imagine how CSS issues get progressively more difficult for AIs to solve. A CSS bug can be made to require reading arbitrarily much code if you solve it by reading code alone, but only inspecting relatively few elements if you look at the page with your eyes.

This can be somewhat solved by hooking up a harness to screenshot the page and feed it into the AI, but it isn't perfect even then.


That's hard to believe in my case. I tried a variety of prompts, 3 different frontier models, provided manual screenshot(s), the agent itself also took its own screenshots from tests during the course of debugging. Nothing worked. I have now fixed the bug manually after 15-20 minutes of playing around with a codebase where I don't know the language and didn't write a single line of code until now.

What's hard to believe? OP just asked what the bug was.

> after wasting a day on this, I am now just going to go through the (single file) of code and just fix it myself.

Seriously, you wasted a whole day just so you wouldn't have to look at a single file of code?

> Update: some 20 minutes later, I have fixed the bug. Despite not knowing this particular programming language or framework.

Be really careful there, you might have accidentally learned something.


It is entirely plausible they were just experimenting with AI tooling to better understand how to use it and what it is capable of. Their saying, 'Despite not knowing this particular programming language or framework.' indicates to me this is probably the case.

Nope. I've been working on this project for a couple of days now and things were mostly going well. A significant portion of the MVP backend and frontend was built and working. Then this one seemingly simple bug appeared and just totally stumped both Codex and Claude Code.

There was even another UI component (in the same file) which was almost the same but slightly different and that one was correct. That's what I copy pasted and tweaked when I fixed the problem. But for some reason the models were utterly incapable of making that connection.

With Codex and Claude Code I thought maybe because these agentic coding tools are trained to be conservative with tokens and aggressively use grep that they weren't looking at the full file in one go.

But with Gemini I used the web version and literally pasted that entire file + screenshots detailing what was wrong (including the other component which was rendering correctly) and it still couldn't solve it. It was bewildering.


I had the exact same issue: a UI scrollbar bug that Claude couldn't fix. It tried 4-5 different ideas that it was sure were causing the issue; none of them worked.

Tried the same with Codex; it did a little better, but still took 4 times around.

This is with playwright automation, screen shots, full access, etc..


I told my manager I wrote my code line by line (most of it) in a check-in. I showed him @author my name, and we laughed for a bit.

But I think that is the best way to have a clear mental model. Otherwise, no matter how careful, you always have tech debt building and churning.

Also they really suck at UI bugs and CSS. Unit test that stuff.


I had a problem that required a recursive solution and Opus4.6 nearly used all my credits trying to solve it to no avail. In the AI apocalypse I hope I'm not judged too harshly for my words near the end of all those sessions lol.

yeah they all suck at ui. have you given it a feedback loop? update code, screenshot, read image repeat etc. that's the best i've found as long as tokens aren't a concern

Haven't you heard? "Coding is solved."

> I am now just going to go through the (single file) of code and just fix it myself.

You can't, it's all vibed; you'll face the art-vs-build internal struggle and end up re-coding the entire thing by hand.


I'm deeply regretting paying for this service right now. There is some gaslighting going on in that issue that it's because of the 1M context model. I am using the non-1M context model and it's still disastrously bad.

Umm, I think you have it reversed. A helium plant in Qatar shutting down causing problems to US chip consumers is precisely because of globalization.


So what, you expect 195 countries to develop a copy of everything just so they can hate the other 194?

Connection and collaboration is always the better way forward.


Redundancy is not "hate", it's robustness against failure. Complex, interconnected supply chains are fragile. If you're building an online storage API, you would (I hope) consider it deeply irresponsible not to set up plenty of followers of the main system for redundancy and automatic failover. The idea that supply chains should not do this, or that any such suggestion is emotionally driven, is extremely bizarre.

Yes, this may increase costs slightly because robustness necessarily has a cost associated with it.


It's amazing how pretty much every reply to my original comment has failed to comprehend that I was not criticizing globalization. Whether it's even possible to get to where we are without it is also debatable.

But the question you're asking me is meaningless, because the premise is wrong. My original reply was true and entirely independent of my or anyone else's opinion of whether globalization is good/bad.


Your comment was technically true but I don't get the point of it.


It's hard to imagine chip supply chain could be commercially viable without globalization.

One could probably argue that giving up globalization means fewer and less capable products.


Good, we agree :)


You can have a 1960s isolated all in country production line if you're happy with the products of the 1960s.


It's a little weird that you seem to think that humans can't make technological progress without globalization.


Pretty sure the helium plant is in Qatar because the helium is in Qatar, not because of globalization.


Interestingly, it actually regressed on Terminal Bench 2.0.

GPT-5.4: 75.1%

GPT-5.3-Codex: 77.3%


Their "API" isn't what's being accessed here. As far as I understand, the issue is people using their subscription account's OAuth token in some third-party app.


If they allowed the OAuth token to work like that, then that is their (Google's) problem.


It is basically impossible to disallow the token to work that way on a technical level. It would be akin to trying to set up a card scanner that can deny a valid card depending on who is holding it. The only way to prevent it from working is analyzing usage patterns/details/etc. in some form or fashion. Similar to stationing a guard as a second check on people whose cards scan as valid.


Exactly, so charge on usage or cap on usage.

Either the token works for all times, or works until it doesn't, or does not work at all.

Punishing the account for using a token you have vended for the exact same purpose is extremely poor product design.


So it sounds like the trillion-dollar corporation can actually do it, but they don't want to spend the money because they are extremely cheap?


Well, it looks like they're not allowing it.


Same. Cannot find it in that thread and I would like to know the source too.


What's "tfa"?


The Fine Article.

It's a reference to "RTFM" = Read the F'ing Manual.


You couldn't Google this?

I mean, even ChatGPT is capable of doing that.


> TFA most commonly refers to Trifluoroacetic acid, a highly persistent, mobile "forever chemical" (PFAS) found globally in water and soil, widely used in organic chemistry as a solvent.


When I searched for "its in the tfa meaning" this was my third result on Duck Duck Go:

https://news.ycombinator.com/item?id=19781756

When I searched for "tfa internet meaning", the fifth result looked helpful, so I clicked it, and it was:

https://www.noslang.com/search/tfa

Searching the internet wasn’t hard before AI, and it isn’t hard today.


I just googled "what is tfa", and none of the results on the first page were related to the current topic.


But surely your search engine must have given you the answer within your first three clicks, if not, perhaps you should consider a better search engine.


Try “TFA acronym Internet forums”.


"hackernews TFA" get better search skills.


You must be one of those “AI can’t possibly make anyone more productive” folks.


Don't know about your parent, but I am certainly one of those "AI can't make anyone more productive" folks.

Well, at least I would say that while being a bit hyperbolic. But folks like us prefer to see claims by corporations trying to sell you stuff backed by behavioral research before we start taking the corporation's word for it.


The irony is that web searches for an explanation of something often lead to a discussion thread where the poster is downvoted and berated for daring to ask people instead of Google. And then there's one commenter who actually explains the thing you were wondering about.

