Israel does military censorship, same as most other countries: you can't publish information that would harm the defense of the country or aid its enemies. Israel is in a hostile, heavily armed neighborhood with frequent hot conflicts involving both regular militaries and irregular forces, so the issue is far more ever-present than in other Western-style democracies, such as those in Europe, whose neighbors are peaceful.
Well, what is AI music? I uploaded many of my songs I produced over the last three decades to Suno and very much enjoy the new arrangements and great solos (see e.g. https://rochus-keller.ch/?p=1350). So yes, I hear AI music, and no, it's not fake at all. If you consider it fake, you should also consider all the singers fake who don't hit the right note without autotune, or all the kids who just turn some knobs or press some buttons. I think that's just the way music develops. AI is just another instrument, like the Synclavier in the eighties or its much cheaper siblings in the nineties, or more recent "aids" like Melodyne before there were music generating models.
These memory notes are just data it was trained on, so it will just train itself on the same data again? How can it judge by itself whether these notes are any good?
This is a fallacious argument that has been thoroughly debunked countless times, and frankly it has no place on a platform where we expect a baseline level of digital literacy.
Privacy isn't about hiding crimes, it's about limiting how much power one government has over you.
History has shown stuff that’s totally fine today can be treated like a problem tomorrow. A surveillance system built under a “good” government can be handed to a shady one.
You're confusing Motorola Mobility with Motorola Solutions. These haven't been part of the same company since 2011. We would happily support devices from Motorola Solutions too, with their collaboration, but we have no contact or partnership with them, as they're an entirely different company. We want to support more devices meeting our requirements, and if people have issues with one of the choices due to their opinions on geopolitics, they can use another.
I'd say you're paranoid. Nobody cares about you, and they won't invest billions just so they can see your hot nude pictures. There are much easier ways to get information out of a phone, no need for a backdoor.
If there were ever any backdoor in some phone, it would have been found. No smartphone company is gonna take the chance of someone finding their backdoor; it would literally kill the company.
Sometimes you become a target purely by chance. You may witness something you should not have seen, be at the wrong place at the wrong time, or the "algorithm" glitches and increases your "threat level" by 5000%. In most of these situations, preparations like running GrapheneOS can be quite the boon.
Or think of friends and family. When they become the target, you are prepared, you have the knowledge and tools ready, you can be the guide that helps them navigate a hostile digital world.
This is such a low-iq argument I cannot even. Yes, nobody cares about OP, you, me, whatever - until they do. Not to mention general harvesting for profiling and propaganda reasons.
General: What are people in this city/country/region/etc. thinking? This is the main case where the data is collected, used, and grouped. It is extremely powerful information for pushing a targeted agenda, whichever it might be.
Targeted: Oh, you or someone close to you went to a political protest? Too bad we have all this information to put you and your family in jail. This is where they suddenly start caring about you, even when it was NOT YOU but someone in your close circle who upset them.
Whether the parent is paranoid or not, Pegasus literally is used to spy; just because the state might not care about his hot nude pictures does not mean it doesn't care about his other phone usage.
"While NSO Group markets Pegasus as a product for fighting crime and terrorism, governments around the world have routinely used the spyware to surveil journalists, lawyers, political dissidents, and human rights activists."[0]
Information like this can be as powerful as a bomb. For example, I could learn about your calls, discover that you do something immoral but not illegal, and use it to blackmail you.
As if the fact that “governments around the world have routinely used the spyware to surveil journalists, lawyers, political dissidents, and human rights activists” wasn't already alarming, Pegasus has also been used to spy on elected officials.
A recent court case investigating spying on 37 elected representatives [1] (including the prime minister, three ministers, and regional politicians) had to be closed in 2023 and again in 2026 “for lack of cooperation of the Israeli government”.
I'm guessing you missed out on the Snowden revelations? Or the news articles about federal agents literally laughing at private dick pics.
And your second paragraph seems to rest on the premise that the average person cares whether there is a backdoor.
I don't know why you wouldn't take security seriously, when even the US government is telling everyone to be careful about where their devices come from because of spying. Just don't trust them to point the finger the right way.
The UK government is known to spy on anti-genocide protestors.
The US government is known to spy on anti-ICE protestors.
If you have an opinion your government doesn't like, or a potential future government doesn't like, there's a good chance you have or will be spied on.
Perhaps you lack a single opinion worth caring about, but most people do not.
>If there were ever any backdoor in some phone, it would have been found.
Not only have MANY been found, but the whole security industry is aware of them and works with/against those backdoors.
This is kind of like a mechanic not knowing what a car's exhaust does...
If AGI ever comes, then maybe. Currently, AI is only a statistical machine, and solutions like this are based purely on distributions, with no logic or actual intelligence.
I swear that AI could independently develop a cure for cancer and people would still say that it's not actually intelligent, just matrix multiplications giving a statistically probable answer!
LLMs are at least designed to be intelligent. Our monkey brains have much less reason to be intelligent, since we only evolved to survive nature, not to understand it.
We are at this moment extremely deep into what most people would have considered actual artificial intelligence a mere 15 years ago. We're not quite at human levels of intelligence, but it's close.
All the answers to all your questions are contained in randomness. If you have a random sentence generator, there is a chance it will output the answer to this question every time it is invoked.
But that does not actually make it intelligent, does it?
This is exactly how problem solving works, regardless of the substrate of cognition.
Start with "all your questions contained in randomness" -> the unconstrained solution space.
The game is whether or not you can inject enough constraints to collapse the solution space to one that can be solved before your TTL expires. In software, that's generally handled by writing efficient algorithms. With LLMs, apparently the SOTA for this is just "more data centers, 6 months, keep pulling the handle until the right tokens fall out".
Intelligence is just knowing which constraints to apply and in what order such that the search space is effectively partitioned, same thing the "reasoning" traces do. Same thing thermostats, bacteria, sorting algorithms and rivers do, given enough timescale. You can do the same thing with effective prompting.
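To make that "collapse the solution space" point concrete, here's a toy sketch of my own (nothing from the thread; TARGET and attempts_until_hit are made-up names): an unconstrained random generator will eventually emit any target string, but every constraint you can inject shrinks the space and the expected search time drops accordingly.

    # Toy illustration only (assumed example, not anyone's actual system):
    # count how many random draws it takes to hit a target string,
    # with and without a known-prefix constraint on the solution space.
    import random
    import string

    TARGET = "cat"  # hypothetical 3-letter "answer" we want the generator to find

    def attempts_until_hit(alphabet, length, prefix=""):
        # Draw random strings until the target shows up; the optional prefix
        # acts as an injected constraint that collapses the search space.
        tries = 0
        while True:
            tries += 1
            guess = prefix + "".join(
                random.choice(alphabet) for _ in range(length - len(prefix)))
            if guess == TARGET:
                return tries

    random.seed(0)
    # Unconstrained: 26^3 = 17,576 possible strings.
    print("no constraint:", attempts_until_hit(string.ascii_lowercase, 3))
    # One injected constraint (first two letters known): only 26 possibilities.
    print("prefix 'ca':  ", attempts_until_hit(string.ascii_lowercase, 3, "ca"))

Same game at a wildly different scale: the question is whether the system can supply enough constraints that the right answer becomes the likely draw rather than the lucky one.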
The LLM has no grounding, no experience, and no context other than what is provided to it. You either need to build that or be that in order for the LLM to work effectively. Yes, the answers to all your questions are in there. No, it's not randomness; it's probability, and that can be navigated if you know how.
You can constrain the solution space all you want, but if you don't have a method to come up with candidate solutions that might match the constraints, you'll just be sitting there all day waiting for the machine to produce some result. So intelligence is not "just knowing which constraints to apply". It is also the ability to come up with solutions within the constraints without going through a lot of trial and error...
But hey, if an LLM can go through a lot of trial and error, it might produce useful results; that still isn't intelligence. It is just a highly constrained random solution generator...
I believe that's what I and the paper are both saying as well. The LLM is pure routing; the constraints currently are located elsewhere in the system. In this case, both the constraints and the motivation to perform the work are located in Knuth and his assistant.
Routing is important; it's why we keep building systems that do it faster and over more degrees of freedom. LLMs aren't intelligent on their own, but it's not because they don't have enough parameters.
You are arguing a point no one is making. LLMs are not random sentence generators. Their probability distributions are anything but random. You could make an actual random sentence generator, but no one would argue about its intelligence.
Last week I put "was val kilmer in heat" into the search box on my browser. The AI answer came back with "No, Val Kilmer was not in heat. Val Kilmer played Chris Shiherlis in the movie Heat but the film did not indicate that he was pregnant or in heat. His performance was nuanced and skilled and represents a high point of the film." I was not curious about whether he was pregnant.
We are not only not close to human levels of intelligence, we are not even at dog, cat, or mouse levels of intelligence. We are not actually at any level of intelligence. Devices that produce text, images, or code do not demonstrate intelligence any more than a printer producing pages of beautiful art demonstrates intelligence.
Honestly, when I read your first sentence, given the lack of a capital H, my brain initially went the same direction the AI did. Then I realized what you meant but since I already went there, I might have made a similar response as a joke. For the sake of my ego I'm forced to reject your claim that this is evidence of stupidity.
It's clearly just a hallucination. Everyone knows there was never a movie called Heat, Val Kilmer did not play Chris Shiherlis in it, and he has always been pregnant.
On Google, just clicking "AI Mode" gives you a substantially smarter model, and it's still pretty weak. But I assume the OP wasn't talking about Google because it doesn't seem to make this mistake even in a search.
It was Bing, as that is the default for Edge as supplied on my work laptop. It doesn't do this now, but it does do something else quite weird:
search: was val kilmer pregnant or in heat
answer:
Not pregnant
Val Kilmer was not pregnant or in heat during the events of "Heat." His character, Chris Shiherlis, is involved in a shootout and is shot, which indicates he is not in a reproductive or mating state at that time.
And then it cites Wikipedia as the source of information.
In terms of cognition the answer is meaningless. Nothing in the question implies or suggests that the question has to do with a movie. Additionally, "involved in a shootout and is shot, which indicates he is not in a reproductive or mating state" makes no sense at all.
If you asked a three-year-old a question that they proceeded to completely flub, would you then assume that all humans are incapable of answering questions correctly?
Nobody is arguing for the quality of the search overviews. The models that impress us are several orders of magnitude larger in scale, and are capable of doing things like assisting preeminent computer scientists (the topic of discussion) and mathematicians (https://github.com/teorth/erdosproblems/wiki/AI-contribution...).
Microsoft is bad at AI and this is a great example. I'm wondering if someone saw your post on HN and tried to hardcode a rule here, because I agree, it's nonsense. None of the actual AI companies are emitting nonsense like this.
It only took 4 years, but it appears that this view is finally dying out on HN. I would advise everyone who found this viewpoint compelling to think about how those same blinders might be affecting how you imagine the future will look.
The issue, to my mind, is a lack of data at the meeting point of QFT and GR.
After all, few humans historically have been capable of the initial true leap between ontologies. But humans are pretty smart, so we can't say that is a requirement for AGI.
“The laws of nature should be expressed in beautiful equations.”
- Paul Dirac
“It is, indeed, an incredible fact that what the human mind, at its deepest and most profound, perceives as beautiful finds its realisation in external nature. What is intelligible is also beautiful. We may well ask: how does it happen that beauty in the exact sciences becomes recognizable even before it is understood in detail and before it can be rationally demonstrated? In what does this power of illumination consist?”
- Subrahmanyan Chandrasekhar
“I often follow Plato’s strategy, proposing objects of mathematical beauty as models for Nature.”
“It was beauty and symmetry that guided Maxwell and his followers.”
- Frank Wilczek
“Beauty is bound up with symmetry.”
- Hermann Weyl
"Still twice in the history of exact natural science has this shining-up of the great interconnection become the decisive signal for significant progress. I am thinking here of two events in the physics of our century: the rise of the theory of relativity and that of the quantum theory. In both cases, after yearlong unsuccessful striving for understanding, a bewildering abundance of details was almost suddenly ordered. This took place when an interconnection emerged which, thought largely unvisualizable, was finally simple in its substance. It convinced through its compactness and abstract beauty – it convinced all those who can understand and speak such an abstract language."
- Werner Heisenberg
Maybe (just maybe) these things (whatever you want to call them) will (somehow) gain access to some "compact", beautiful, "largely unvisualizable" "interconnection" which will be the self-evident solution. And if they do, many will be sure to label it a statistical accident from a stochastic parrot. And they'll be right, for some definitions of "statistical", "accident", "stochastic", and "parrot".
Donald Knuth is an extremal outlier human and the problem is squarely in his field of expertise.
Claude, guided by Filip Stappers, a friend of Knuth, solved a problem that Knuth and Stappers had been working on for several weeks. Unfortunately, it doesn't seem (from my quick scan) to have been stated how long (or how many tokens or $) it took for Claude + Stappers to complete the proof.
In response, Knuth said: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."
Seems like good advice. From reading elsewhere in this comment section, the goalposts seem to be approaching the infrared and will soon disappear from extreme redshift, given the rate at which they are receding with each new achievement.
What goalposts do you think are being moved? I constantly see AI enthusiasts use this phrase, but it’s not clear what goalposts they have in mind. Specifically, what is it that you want opponents to recognize that you believe they aren’t currently?
We now have a tool that can be useful in some narrow domains in some narrow cases. It’s pretty neat that our tools have new capabilities, but it’s also pretty far from AGI.
Imagine hearing pre-attention-is-all-you-need that "AI" could do something that Donald Knuth could not (quickly solve the stated problem in collaboration with his friend).
The idea that this (Putnam perfect, IMO gold, etc) is all just "statistical parrot" stuff is wearing a little thin.