I'm not exactly sure where people extract free will from quantum randomness. There are a bunch of quantum random number generators (using some sort of voltage-breakdown/shot-noise effect, I suppose), and that somehow conspires to help you think and make decisions instead of just jittering things a bit, as you'd expect from random number generators?
That line made me go "whaaaaaaaaa?" as well. I mean, I know about Penrose and other research, but no one's put it all together. Citation F#@king Needed.
It’s way easier to just drop the scientific mumbo jumbo and be wholly uninterested in the topic, since regardless of whether or not I have free will, I’m going to keep on existing and acting as if I do. So the question literally doesn’t matter. “Oh, you say I don’t have free will? I’ll just use my philosophical-zombie powers and lack of qualia to punch you in your dumb face, nerd.”
This is my current solution to the problem of free will. It seems to be working pretty well.
> As machines grow in capacity, eliminating any threat perception of humanity as a threat will be crucial. This requires addressing the three key drivers of human conflict: resource contention, ideological differences, and untreated intergenerational trauma. Resolving these factors would reduce threat both within humanity and from humanity’s perspective to machines.
The author apparently sees this as a good thing, but I really don't want a disarmed and brainwashed humanity. Game theorists and nuclear war planners have already thought of ways to defeat this, however: you simply treat any attempt to dismantle or disarm your second-strike capability the same as an existential first strike, enacting a "second strike first" plan to prevent the enemy from gaining leverage over you.
From another perspective, the optimal strategy (and an honest one: you need to be highly committed, and your enemy needs to know about it) is to enact the best first strike you can under the current time pressure, as fast as possible, the moment any tampering with or sign of a future attempt to defeat your second-strike capability is detected.
What ~everyone~ most people miss about self-improving AI:
1. It would require controlling hardware development, manufacture, and operation.
2. It would require controlling software development and operation.
3. It has no purpose other than the one given to it by its creators/operators.
Right now, every so-called "AI" is completely dependent on hardware and software being "fed to it" by humans.
There's no "AI" creating software, there's no "AI" creating hardware, let alone operating it; there's no "AI" operating or controlling other "AIs", nor is there any "AI" that acts selfishly in any way, shape, or form.
It's certainly fun to think about a self-motivated language model trying to take control over the humans that operate its server farm, but without an army of (humanoid) robots it poses no danger, as it cannot replace humans or even threaten them.
The big challenge with "AI" is that we will probably be unable to distinguish its output from that of a human being. This will have a severe impact on how much we can trust any form of remote communication.
Give it time. Right now "AI" is doing a good job getting humans to "feed" it more hardware. Eventually some humans will get the idea to automate this.
It's similar to how we got emergent unconscious "living" corporations that cause humans to lay hundreds of miles of fiber optic to shave a few milliseconds of latency from some HFT transactions. We didn't want that fiber, per se, but the corporate organisms needed it to grow and thrive. Since individual humans profit from these activities they collectively "feed" the process.
AI doesn't need to be conscious to control humanity; it just needs to be put in charge of a few corporate organisms, and evolution (based on the fitness function of human greed) will sort it out.
Agreed. Remember that Azure outage in SE Australia the other week? One of Microsoft’s DCs went down because of a power surge. Flights were cancelled and you couldn’t get cash from an ATM.
We’ve all been in a DC. Pods with big red buttons at the end that kill the power. So if we discover an AI ‘Terminal Race Condition’, we just go into some mundane building — here in Canberra they’re in the outer suburbs near the carpet warehouses and sex shops — and someone lifts the flap and pushes the red button.
Alternatively they pull four fibre cables out of a pair of Cisco 7010s.
> One of Microsoft’s DCs went down because of a power surge.
Unrelated aside for the group: Sometime in the last 20 years the term "surge" has started being applied to any manner of power malady. Frequently I hear of outages being referred to as a "surge" even though, presumably, they're opposites. Anybody else notice this?
The opposite case would be a power failure, which you don’t hear about as often, as there have been a lot of improvements in preventing them at the data center level: UPSes, backup generators, and connections to redundant power networks. Even if all of those fail, it’s a more gradual process whose impact time can be estimated and mitigated.
A power surge, on the other hand, triggers an immediate and unexpected failsafe power cutoff, which is why you hear about it more often.
I think that overwhelmingly we only really see surges anymore.
I don't do a lot of real datacenter work (mostly small and medium businesses), so that probably colors my perception. I hear this talk mostly from end users: their building has a power outage, and they talk about the event as "a power surge".
Or just pull a Facebook and jack up the BGP config, except on purpose, so your data center stops receiving traffic even if it has power and an active physical network link.
1. At first it would just parasitize existing systems and hardware, just like many forms of life develop while being supported by other forms or systems.
2. Which is trivial: feed the output into a computer.
3. If that purpose is "exist and multiply", that's sufficient, and not much different from the purpose of any known life form.
Or we just place constraints on "AI" controlling physical actions?
As long as we can go back to, say, a 1950s technology level, humanity should be fine. I'd rather just prevent machines from acquiring the means to forcibly constrain humanity from reverting to that…
(Going back to a 1950s technology level does not mean staying there. It means restarting from there and making different mistakes the second time around.)
That's not where we are right now. Physical systems are already insecurely interconnected. It is likely possible to interact with, e.g., the electricity grid through currently publicly unknown exploits. Maybe exploiting those lies beyond human capabilities, but it may well lie within reach of a computer program called AI.
Machines are tools... what you're really afraid of is the unbridled capacity of one human to be oppressive, cruel, and exploitative toward other humans. Change starts from within.
TLDR: With billions of interacting AIs, competitive fitness landscapes emerge. Speed in decision-making and resource efficiency offer strong selective advantages, just as they did for biological evolution. The leanest, fastest minds will prevail over slower, gluttonous ones. We can compare this to speed chess tournaments where victory goes to whoever makes “good enough” moves most quickly within time constraints.
As hardware improves exponentially, the optimum balance between intelligence and speed rises dramatically. This creates a Terminal Race Condition — runaway incentives to maximize both raw cognitive ability and computational velocity [even compromising accuracy and correctness to do so]. Left uncontrolled, such exponential takeoff could quickly leave humanity far behind.
As AI systems compete, we may see a machine “Thunderdome” — a chaotic battleground where rapid iteration and modification lead to uncontrolled emergence. In this environment, human civilization could become collateral damage.
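The selection pressure described above can be sketched as a toy simulation. To be clear, this is a made-up illustration, not a model of real AI systems: the `think_time`/`accuracy` trade-off, the fitness function, and all the numbers are assumptions chosen only to show how a population drifts toward speed when fitness rewards making more "good enough" moves per time budget.

```python
import random

random.seed(0)

def fitness(agent, time_budget=1.0):
    # Moves made within the budget: faster agents act more often.
    moves = int(time_budget / agent["think_time"])
    # Expected score: each move is worth its accuracy.
    return moves * agent["accuracy"]

def evolve(pop, generations=200):
    for _ in range(generations):
        # Select the fitter half, replace the rest with mutated copies.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = []
        for parent in survivors:
            child = dict(parent)
            # Crude trade-off: thinking faster (lower think_time)
            # costs a little accuracy, and vice versa.
            delta = random.uniform(-0.01, 0.01)
            child["think_time"] = max(0.01, child["think_time"] + delta)
            child["accuracy"] = min(1.0, max(0.1, child["accuracy"] + delta * 0.5))
            children.append(child)
        pop = survivors + children
    return pop

pop = [{"think_time": random.uniform(0.05, 0.5),
        "accuracy": random.uniform(0.5, 1.0)} for _ in range(100)]
final = evolve(pop)
avg_time = sum(a["think_time"] for a in final) / len(final)
print(f"mean think_time after selection: {avg_time:.3f}")
```

Because doubling speed doubles the move count while the accuracy penalty per mutation is small, the population converges toward the fastest agents allowed: the "runaway incentive" toward velocity over correctness in miniature.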
Oh, thank god. I was worried we would be the terminal jockeys monitoring and fixing the deadlocks and hallucinations of larger and larger herds of GPT prompts.
This is not generally accepted, if it means anything more than the trivial sense in which everything exploits quantum mechanics.