konaraddi's comments | Hacker News

QQQ rebalances on a schedule. Existing holders are affected because the fund’s underlying composition will change.

This. If you are invested in a Nasdaq index fund (e.g. QQQ), it will have to sell some of the tail and buy the necessary weighted percentage of Snake Oil. Apart from you buying snake oil, you will realise some extra capital gains/losses due to the rebalancing.

And to be clear it's not just QQQ; countless retirement target date funds have a Nasdaq component. That's the real target of this grift, your retirement fund.

My understanding: it depends on which index the fund tracks. QQQ tracks the Nasdaq-100, so QQQ is vulnerable. VT tracks the FTSE Global All Cap Index, so VT is not directly affected by Nasdaq’s choices, but it is still exposed to some extent: SpaceX is likely to end up in the aforementioned FTSE index, Nasdaq’s actions impact SpaceX’s market cap, and thus Nasdaq’s actions impact SpaceX’s position in that FTSE index, which in turn affects VT’s composition (to a smaller extent than QQQ’s).

EDIT: to be clear the above are just examples with two funds (QQQ and VT)
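To make the rebalancing mechanics above concrete, here is a minimal sketch of how adding a new constituent to a cap-weighted index shrinks every incumbent's weight, forcing a tracking fund to sell a slice of everything else. All tickers, market caps, and the new entrant are hypothetical.

```python
def index_weights(market_caps):
    """Cap-weighted index weights: each weight = cap / total cap."""
    total = sum(market_caps.values())
    return {t: cap / total for t, cap in market_caps.items()}

# Hypothetical three-stock index, market caps in $bn (made up)
caps = {"AAA": 600.0, "BBB": 300.0, "CCC": 100.0}
before = index_weights(caps)

# The index provider adds a hypothetical new entrant
caps["NEWCO"] = 250.0
after = index_weights(caps)

for t in before:
    # Every incumbent's weight drops; a tracking fund must sell the difference.
    print(t, round(before[t], 4), "->", round(after[t], 4))
```

With these made-up numbers, "AAA" drops from 60% of the index to 48%, so an index fund must sell roughly a fifth of its "AAA" position to fund the new purchase, realising gains or losses for existing holders.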


FTSE Russell is proposing changes similar to Nasdaq, with the consultation ending 18 March.

VFIAX?

I think it’d be a rinse-and-repeat of the line of thinking for VT, but with more exposure than VT.

From the VFIAX fund’s description on Vanguard:

> The fund offers exposure to 500 of the largest U.S. companies


Based on the comment from [1], it seems like the issue with Nasdaq is that anyone tracking it is contractually obligated to include SpaceX? What about other funds? VFIAX’s description says

>The Global Equity Index Management team applies disciplined portfolio construction and efficient trading techniques designed to help minimize tracking error and maintain close alignment with benchmark characteristics [of S&P 500].

So given that this only affects Nasdaq, I'm guessing they aren't affected? And even if the S&P 500 started to play the same games, why can't their supposedly disciplined "Global Equity Index Management team" simply opt not to play along with these shenanigans? Or if they do mechanically track the S&P 500, what exactly is the "management fee" paying for?

[1] https://news.ycombinator.com/item?id=47394355


There’s a lot to address here, but in short: VFIAX is an index fund; it tracks the S&P 500 index; it’s not actively managed; SpaceX will likely be in the S&P 500; so my comment about VT applies to VFIAX (as far as the question of exposure is concerned) but to a greater extent than VT (compare VT’s composition with VFIAX’s).

Obligatory not financial advice, I’m not an expert, don’t make any financial decisions based on hacker news comments, etc


I work at AWS and generally use Claude Opus 4.6 with the 1M context window in Kiro (AWS’s public competitor to Claude Code). My experience is positive. Kiro writes most of my code. My complaints:

1. Degraded quality over longer context window usage. I have to think about managing context and agents instead of focusing solely on the task.

2. It’s slow (when it’s “thinking”), especially when it’s tasked with something simple. E.g., I could ask Claude Opus to commit code and submit it for review, but it’s just faster if I run the commands myself, and I don’t want to have to think about conditionally switching to Haiku or other faster models mid-task.

3. It often requires a lot of upfront planning and feedback-loop setup, to the extent that sometimes I wonder if it would’ve been faster to do it myself.

A smarter model would be great, but there are bigger productivity gains to be had with a good setup, a faster model, and abstracting away the need to think about agents or context usage. I’m still figuring out a good setup. Something with the speed of Haiku and the reasoning of Opus, without the overhead of having to manage agents or context, would be sweet.


The context degradation problem gets much worse when you have multiple agents or models touching the same project. One agent compacts, loses what it knew, and now the human is the only source of truth for what actually happened vs. what was reported as done. If that human isn't a coder, they can't verify by reading the source either.

I've been working on this and landed on a pattern I call a "mechanical ledger", basically a structured state file that sits outside any context window and gets updated as a side effect of work, not as a step anyone remembers to do. Every commit writes to it, every failed patch writes to it, every test run writes to it. When a session starts (or an agent compacts), it reads the ledger and rebuilds context from ground truth instead of from memory.

It's not a novel idea really; it's basically what ops teams do with runbooks and state files, but applied to the AI-agent handoff problem. The interesting bit is making the updates mechanical so no agent can forget to do them.
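A minimal sketch of the "mechanical ledger" pattern described above: an append-only JSONL file written as a side effect of work (e.g. from a git post-commit hook or a test-runner wrapper), then replayed at session start to rebuild ground truth. The file name, event names, and schema here are illustrative assumptions, not a real tool's API.

```python
import json
import time
from pathlib import Path

LEDGER = Path("agent_ledger.jsonl")
LEDGER.unlink(missing_ok=True)  # fresh ledger for this demo

def record(event: str, **fields):
    """Append one event. Hooks call this as a side effect of work,
    so no agent has to remember to update the ledger."""
    entry = {"ts": time.time(), "event": event, **fields}
    with LEDGER.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def rebuild_state():
    """Replay the ledger into a snapshot a fresh session can trust,
    instead of relying on any agent's (compacted) memory."""
    state = {"head": None, "tests_passed": None, "failed_patches": []}
    if not LEDGER.exists():
        return state
    for line in LEDGER.read_text().splitlines():
        e = json.loads(line)
        if e["event"] == "commit":
            state["head"] = e["sha"]
        elif e["event"] == "test_run":
            state["tests_passed"] = e["passed"]
        elif e["event"] == "patch_failed":
            state["failed_patches"].append(e["patch"])
    return state

# Hypothetical events fired by hooks during normal work:
record("commit", sha="abc123")
record("test_run", passed=True)
record("patch_failed", patch="fix-auth.diff")

print(rebuild_state())
```

The design choice worth noting: the writers are hooks, not agents, so the ledger stays accurate even when a session compacts or a different model takes over mid-project.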



Good shout, there's definitely overlap. Beads looks like it's solving the task-planning and dependency-tracking side very well. What I ended up building is more about ground-truth state. It doesn't track tasks; it tracks what actually happened: which commit we're on, what tests passed, which patches failed, whether the different machines used are in sync. So when a new session starts with a completely different agent, it doesn't have to trust anyone's memory; it just reads the ledger. Probably complementary, honestly. Both together would potentially be ideal if you're running multi-agent across machines.

> A smarter model would be great, but there are bigger productivity gains to be had with a good setup, a faster model, and abstracting away the need to think about agents or context usage. I’m still figuring out a good setup. Something with the speed of Haiku and the reasoning of Opus, without the overhead of having to manage agents or context, would be sweet.

I was thinking about this recently. This kind of setup is the Holy Grail everyone is searching for: make the damn tool produce the right output more of the time. And yet, despite testing the methods provided by people who claim they get excellent results, I still reach the point where it gets off the rails. Nevertheless, since practically everybody is working on this particular issue, and huge amounts of money have been poured into getting it right, I hope that in the next year or so we will finally have something we can reliably use.


Context degradation is a real problem.

> We've effectively had that here with the ACA, where the government has decided that it will cover the first $800 or so dollars of your health insurance. What happened? Magically, the cost of health insurance increased by $800.

I don’t think that’s an accurate description of the ACA [1]. It didn’t lead to a dollar-for-dollar increase in premiums (share a citation if otherwise), and it’s a bit misleading to say it led to an increase in premiums, because plans pre-ACA were effectively inaccessible to, and lacking in benefits for, impoverished people and people with pre-existing conditions.

[1] Here’s a brief description of ACA from Wikipedia:

> The act largely retained the existing structure of Medicare, Medicaid, and the employer market, but individual markets were radically overhauled.[1][11] Insurers were made to accept all applicants without charging based on pre-existing conditions or demographic status (except age). To combat the resultant adverse selection, the act mandated that individuals buy insurance (or pay a monetary penalty) and that insurers cover a list of "essential health benefits". Young people were allowed to stay on their parents' insurance plans until they were 26 years old.


There will never be a cited reason for increases, but here's 2023, when basically all insurers filed for a 10% increase in premiums. [1]

Since the 2022 COVID bill, which significantly increased the subsidization of premiums, health insurers have found various reasons to increase their premiums by inflation-beating numbers.

That's obviously a "the market will bear it" situation.

The ACA was a big bill that did a lot. I'm not talking about all of it, but rather the premium subsidization, along with the COVID-era subsidy increase, both of which expired in 2026.

Look, the premium subsidies expiring was bad. I don't know if that was clear from my earlier comment. But there's a fundamentally unaddressed issue with insurers in general: they charge not based on competition or the cost of service, but based on what consumers can bear. Profit incentives for healthcare in the US are completely misaligned with providing good general healthcare. The ACA subsidies are a band-aid over an artery laceration. Better than nothing, but that thing is going to very quickly start bleeding through. You can keep slapping on band-aids, but ultimately you'll be looking at more damage if you don't address the underlying issue.

[1] https://www.healthsystemtracker.org/brief/an-early-look-at-w...


I booted ChromeOS Flex on a >12-year-old laptop earlier this year and had a good experience with it. I wrote a bit about it here: https://konaraddi.com/writing/2026-01-01-chromeos-flex/ (tl;dr: I tried Fedora first but had no luck with WiFi out of the box, then I used ChromeOS Flex and it worked out of the box)

Speaking for myself: it's a bit creepy and unsettling. Using brain cells is probably inching closer to consciousness than today's silicon is, and consciousness isn't well understood, so I'd fear this line of research could eventually lead to the "I Have No Mouth, and I Must Scream" scenario the other commenter referenced. Many decades from now, we might be wondering how much of a human brain needs to be grown in a lab before it's considered unethical.

Is that an issue only because these neurons are biological (still artificial, since they are lab grown)? Silicon neurons could also become more powerful and lead to "I Have No Mouth, and I Must Scream". In fact, top tech companies are investing hundreds of billions of dollars per year to make their silicon neurons more powerful.

https://odap.knrdd.com/

A site for anti-patterns in online discourse.

Example: https://odap.knrdd.com/patterns/strawman-disclaimer

Need to gather more patterns, then create tooling to make it easier to use.

The goal is to raise the quality of comments/posts in forums where the intent is productive discussion or persuasion.


I think that’s an awesome idea, and I like that it proactively gets ahead of the problem instead of taking the retroactive approach of moderation today. I’m interested in a very similar goal; I’ve been working on a guide to anti-patterns in internet discourse at https://odap.konaraddi.com in hopes of it being used to make discourse on the internet more productive and pleasant (the guide is a work in progress).


Thank you, and wow, that looks like an amazing site. We desperately need more pleasant discourse (I think HN in general is, by and large, a great example of good discourse) and I feel like you've codified some excellent rules.


Holy mackerel!

Your site is fantastic! Well done!


I think a significant distinction between your approach and Claude’s is that yours requires allowing your machine to accept inbound connections, while Claude’s does not. Claude probably went with the latter to avoid a whole class of security issues and mitigate the risk of users having their machines compromised. I’m not familiar with what the new attack vectors are with Claude’s approach, though.


I hope this succeeds and isn’t backdoored


It's a pretty obvious honeypot. They're promising privacy even though they can't realistically provide it. The whole thing has ties with American surveillance companies. It's Operation Trojan Shield all over again.

