yfw's comments

Have you not seen the principals and seniors being offered the door or buyouts?

Sensitivity vs. specificity.
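For readers unfamiliar with the distinction, here is a minimal sketch of the two metrics computed from confusion-matrix counts (the counts below are made-up illustration values, not from any real evaluation):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    sensitivity = tp / (tp + fn)  # of all actual positives, fraction caught
    specificity = tn / (tn + fp)  # of all actual negatives, fraction correctly rejected
    return sensitivity, specificity

# Hypothetical detector: 90 hits, 10 misses, 80 correct rejections, 20 false alarms
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(sens, spec)  # 0.9 0.8
```

A detector can trade one for the other: flag everything and sensitivity hits 1.0 while specificity collapses to 0, and vice versa.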

You can easily SEO the knowledge chain, or SEO-poison the sources.

So realistically, no AGI.

By all accounts, we're 2 years away from AGI, every year.

It's like fusion power, except there we halve the funding every year instead of doubling it.

Fusion power is proven to be possible.

AGI is not.


AGI is 100% possible, even if the current breed of transformer-based models are not it, and even if silicon is not it. There's nothing special about human brains that we won't eventually be able to match (and then exceed) in vitro. We are living proof that intelligence can be built out of matter, and that human-scale intelligence can run on 20 watts. It's not a matter of if, but when.

There is (eventually) no more profit to be made on energy when energy becomes virtually limitless.

There is (still) a lot of profit to be made on half-baked semi-AGI prospects.


It's not like the machines will ever be free, just the fuel. And it's not like the price of energy will go to zero, just be cheaper. To drive down the price of energy you first need to be taking a large slice of a trillion dollar pie.

If fuel or any other form of energy becomes virtually limitless and free, any form of matter will eventually also be kinda limitless and free. Could take longer than humanity will ever last though.

In the 'short' and current term there is still lots of money to be made in fuel indeed, but advancements in fossil free energy could make a real shift.


That's ok, that's when you change the definition of AGI and claim success!

There is not even an agreed upon definition for intelligence or for AGI.

> Most of everything tends to suck. Most projects go nowhere, most companies fail, most scientific papers are garbage.

Umm, what's your point? We aren't spending $1.4T on other shitty things that are tipped to fail.


Right, so let's fire them all, especially the ones with domain knowledge.

Companies might bet that it is safer to base their businesses on more fungible explicated domain knowledge rather than knowledge that is siloed in human brains.

I guess you could hire people to work it out, or AI to hallucinate it.

You could also sabotage others.


Sure, but it doesn't really change motivations around that. Shady politicians might do that anyway, bet or no bet.

The problem with sabotaging yourself is that it challenges the assumption that everyone is playing to win. If anything, supporting other candidates might be questionable, if you bet on yourself losing.


Read Careless People. The fish rots from the head.


None of those employees can claim ignorance as to who they work for. They made their bed, now they can lie in it.


If AI benefited everyone and not just the billionaires, we would be viewing it differently.


That's a truism. But it ignores The Iron Law of Oligarchy, Pareto Principle, and dozens more that remind us that power tends towards centralization. It's currently fashionable to call out the billionaires, but if you removed them, they'd just be replaced by corrupt government officials, or something else.

That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.


More importantly we shouldn't deny the rest of humanity benefits on the basis that the majority of the benefit accrues to the powerful. We should strive to change the distribution pattern, not remove the benefit.


>we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.

Not to put too fine a point on it but this was basically how the Japanese post war economic miracle was achieved.

In this case it was America which ordered the Japanese oligarchy to be stripped of its wealth.

We've had decades of propaganda telling us that this is the worst thing we could do for economic growth though so it's natural to doubt.


The problem with billionaires is that they are able to hoard so much money by exploiting others. We would be much better off if billionaires weren't given so much advantage by Capitalism as those resources would be much more useful if distributed.

The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).


“But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.”

You’re right. Instead of implying, we should be taking active steps to do it.


Right, giving up is actually how these things end up becoming principles/laws. Power centralizes because people become complacent and ignorant on matters of power, so a power vacuum forms, which others seize the opportunity to fill. But absolute centralization of power almost never occurs, because of the delegation required to wield that power in practice, and so these two forces end up balancing each other. As such, the equilibrium point (or point of maximum entropy) ends up being some type of oligarchy. Anyone can take steps to address this and shift that equilibrium point, but it takes active work.


Exactly this.

