That's a legitimate question and it has no good answer. Not just Sudan: there is an ongoing genocide in Myanmar, against the Rohingya, and an ongoing genocide against the Uyghurs in China. None of those get nearly the amount of coverage the genocide in Gaza gets, or, now, the wars in Iran and Lebanon.
I have no idea why. I have recently started to grow a bit paranoid and wonder whether I am being manipulated by the media I consume. That would not be a huge surprise, I'm willing to bet most people are influenced by some of the things they read online.
Anyway this is an interesting question that has to be answered: why only Gaza, and not the other genocides?
If you really cared about those other conflicts, I'd expect to see you mention them more often in your comments. Are you sure you actually care about them, or do you just want people to stop talking about Gaza?
Super easy answer: because only on Gaza does your government openly side with the perpetrators, arm and finance them; the media justify them, laws are passed to curb criticism and punish boycotts, and people in online discussion forums always bring up the same debunked arguments and rhetorical devices to divert attention [1], blame the victims and justify the perpetrators. It's the disagreement that fuels the discussion, the obvious contrast between the right position and the official statements and public propaganda.
[1] Of which yours is a classic example: "why talk about this and not about something else?"
>> Written by the creator of the Daleks, Terry Nation, and Dennis Spooner, the serial starred Hartnell and Purves alongside an early appearance by Nicholas Courtney as Bret Vyon, Adrienne Hill as Katarina, and Kevin Stoney as Mavic Chen.
I have to ask, are there really "anti-AI activists"? Like, are there people marching against AI, attacking data centers, spray-painting "AI OUT" on computers, and so on? Or is it just an exaggeration by Carmack?
This is a conversation forum, so it's natural for people to ask questions of each other. Sure, we could, in principle, ask Google or ChatGPT for everything, but then why have an online conversation at all?
Basically, a holdover from the days of symbolic AI, from back when neural network ML wasn't the dominant AI paradigm.
Some people in the "symbolic AI" camp didn't take the loss well, so they pivoted towards "ML is not real AI and it needs a symbolic component to be a real AI", which is the neurosymbolic garbage.
This work isn't exactly that, and I do think it can amount to something useful, but the justification for it reeks of something similar.
Full disclosure: all my published work is on symbolic machine learning (a.k.a. Inductive Logic Programming) :O
I think you're lumping various different things together as "neurosymbolic AI". There is a NeSy symposium and I happen to have met many of the people there, and they are not GOFAI ideologues; rather, they recognise the obvious limitations of neural nets (i.e. they're crap at deduction, though great at induction) and they look for ways to address them. Most of that crowd also has a predominantly statistical ML / neural nets background, with symbolic AI as an afterthought.
I don't think I've ever heard anyone say that "ML is not real AI" and I mainly move in symbolic AI circles. I would check my sources, if I were you.
Anyway, honestly, this is 2026; there is no sensible reason to be polarised about symbolic vs. statistical AI (or whatever distinction anyone wants to make). An analogy I like to make is as follows: a jetliner is a flying machine, and a helicopter is a flying machine. Each has its advantages and disadvantages, but a flying machine is too useful a thing to give up on any one kind for ideological reasons. The practical benefits overwhelmingly make up for any ideological concerns (e.g. "jets bad" or "propellers bad").
And just to be clear, symbolic AI is still in rude health: automated theorem proving, planning and scheduling, program verification and model checking, constraint satisfaction, discrete optimisation, SAT solving, all those are fields where symbolic approaches are dominant, and where neural nets have not made significant inroads in many decades; nor are they likely to, not any more than symbolic approaches are likely to make any inroads in e.g. machine vision, or speech recognition. And that's just fine: lots of tools, lots of problems solved.
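To make the "symbolic" side concrete, here's a toy backtracking solver in the DPLL family, my own sketch and not any of the production systems mentioned above: clauses are lists of nonzero integers, where a positive integer is a variable and a negative integer its negation.

```python
# Minimal DPLL-style SAT solver sketch (no unit propagation or
# heuristics, so purely illustrative). A formula is a list of
# clauses; a clause is a list of literals (nonzero ints).

def dpll(clauses):
    if not clauses:
        return True                 # all clauses satisfied
    if any(len(c) == 0 for c in clauses):
        return False                # empty clause: contradiction
    lit = clauses[0][0]             # branch on the first literal
    for choice in (lit, -lit):
        # Simplify: drop satisfied clauses, strip falsified literals.
        simplified = []
        for c in clauses:
            if choice in c:
                continue
            simplified.append([l for l in c if l != -choice])
        if dpll(simplified):
            return True
    return False
```

Real SAT solvers add unit propagation, clause learning and clever heuristics on top of this skeleton, but the basic search is the same.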
I don't think symbolic approaches are completely useless. It's just that they're solving yesterday's problems 1.12% better. While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
One is near the end of its potential while another is only picking up steam.
In many ways, the space ML dominates now is the space of "all the things symbolic approaches suck ass at". Which is a very wide space with many desirable things in it.
Well, neural nets do what neural nets do best (not ML in general, which is a broader field), so if a lot of funding is going to neural nets then we'll see a lot of progress on the stuff neural nets are best suited for. No surprise. If Google et al were spending billions on symbolic AI maybe we'd see equally spectacular results there too. Maybe not. But we won't know because they don't.
There's no sense in which symbolic AI is at the end of its life and if you pay close attention you'll see that LLMs are trying to do all the things that symbolic AI is good at: major examples being reasoning, and planning from world models.
And as nextos says in the sibling comment most of the recent successes of LLMs in tasks that go beyond language generation, e.g. solving math olympiad problems, are the result of combining LLMs with symbolic verifiers.
>> While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
I don't agree. Everything that neural nets do today, speech recognition, object identification in images, machine translation, language generation, program synthesis, game playing, protein folding, research automation, I mean every single thing really, is a task that comes from the depths of AI history. There's a big discussion to be had about why those tasks are "AI" tasks in the first place and what they have to do with "intelligence" in the broader sense (e.g. cats are intelligent but they can't generate any sort of text) but this discussion is constantly postponed as we all breathlessly run up the hill that neural nets are climbing. When we get to the top and find it was the wrong hill to climb, maybe we'll have that discussion at last, or maybe the entire industry, academia in tow, will run after the Next Big Thing in AI™ all over again. But- cracking open new fields? Nah. Not really.
AGI is not going to happen any time soon though. We have no idea what we're doing in terms of reproducing intelligence, that much is clear.
The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.
It's also the kind of thinking that results in "neurosymbolic garbage is good actually".
What neural nets do today is basically "everything humans do". There is no longer a list of "things computers can't do" - just a list of things computers do worse than the top 1% of humans. Ever shrinking.
Well, for example a computer can't make me an omelette. There are tons of examples like that, pretty much everything humans "can do" with our bodies, that computers can't. Not just because they don't have bodies, but because even when we give them bodies we can't program them to do the things we want them to. LLMs don't help at all here. They can easily fake knowing what to do, but the (not few) attempts people have made to connect an LLM to a robot, to get the LLM to drive the robot like a little AI brain, have ... not really worked out? I guess? Not even self-driving cars use LLMs.
Speaking of self-driving cars' AIs, while they have plenty of machine learning components, e.g. for vision, SLAM, and so on, they are largely hand-coded, rule-based systems. Just like the good old days of GOFAI.
>> The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.
I can't see anything about "training a transformer". I'm trying to understand if e.g. the Sudoku solver was learned from examples (in which case, what examples?) or whether it was manually coded and then "compiled" into weights.
There is no training in the usual sense of the term, i.e. no gradient descent, no differentiable loss function. They use deceptive language early on to make it sound that way, but near the end they make it clear that their model, as is, isn't actually differentiable, and that it might in theory still work if made differentiable. But they don't actually know.
But IMO this is BS, because I don't know how one would get or generate training data, or how one would define a continuous loss function that scores partially-correct / plausible outputs (e.g. is a "partially correct" program / algorithm / piece of code even coherent, conceptually?).
Yeah, a "100% correct" Sudoku solver fully trained by gradient descent from examples? That sure would be something entirely new.
To answer dwa3592, it's always possible to set the weights of a neural net by hand, albeit extremely fiddly and normally only done "on paper". This is e.g. how the Turing-completeness of RNNs was shown back in the '90s:
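As a toy illustration of what "setting the weights by hand" means (my own sketch, nothing to do with the '90s RNN constructions or the linked article), here's a tiny two-layer network whose weights are chosen on paper so that it computes XOR exactly, with no training at all:

```python
import numpy as np

def step(x):
    # Hard threshold activation: 1 if the input is positive, else 0.
    return (x > 0).astype(int)

# Hand-set weights: hidden unit 1 computes (a OR b),
# hidden unit 2 computes (a AND b).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])   # thresholds for OR and AND

# Output unit computes OR AND (NOT AND), which is XOR.
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(a, b):
    x = np.array([a, b], dtype=float)
    h = step(W1 @ x + b1)
    return int(step(W2 @ h + b2))
```

Fiddly even at this scale, which is the point: it's doable in principle, painful in practice.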
So, what I'm trying to understand, and I can't find any clear information about it in the article, is how they "compiled" e.g. the Sudoku solver into a Transformer's weights. Did they do it manually? Say, they took the source of a hand-coded Sudoku solver and put it through their code-to-weights compiler, and thus compiled the code into the Transformer weights? Or did they go the Good, Old-Fashioned, Deep Learning way and train their Transformer to learn a ("100% correct"!) Sudoku solver from examples? And, if the latter, where are the details of the training? What did they train with? What did they train on? How did they train? etc. etc.
My interpretation is that they built a simple virtual machine directly into the weights, then compiled a WASM runtime for that machine, then compiled the solver to that runtime.
Nope, they encoded or compiled a simple VM / WASM interpreter into the transformer weights; there is no training. You'd be forgiven for this misreading, as they misleadingly suggest early on that their model is (in principle) trainable, but later admit that their actual model is not actually differentiable, only that a differentiable approximation "should" still work (despite no info about what loss function or training data could allow scoring partially correct / incomplete program outputs).
Thanks, but where do they say that? I can only find this instance of "different" (as in "differentiable") in their article:
Because the execution trace is part of the forward pass, the whole process remains differentiable: we can even propagate gradients through the computation itself. That makes this fundamentally different from an external tool. It becomes a trainable computational substrate that can be integrated directly into a larger model.
In the section "Programs into weights & training beyond gradient descent", near the end, they say:
[...] *the compilation machinery we built for generating those weights* can go further. In principle, arbitrary programs can be compiled directly into the transformer weights, bypassing the need to represent them as token sequences at all. [...] [my emphasis]
In the same section, they also continue:
Weights become a deployment target: instead of learning software-like behavior, models contain compiled program logic.
If logic can be compiled into weights, then gradient descent is no longer the only way to modify a model. Weight compilation provides another route for inserting structure, algorithms, and guarantees directly into a network.
So they (almost invisibly) admit that they compile the program into the weights, though the later sentences make it clearer this was the intention the whole time.
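For intuition about "program logic compiled into weights", here's the idea at its absolute smallest, entirely my own sketch and not the article's mechanism: a fixed function on {0,1,2,3} encoded as a matrix, so the "network" computes it with a single matrix multiply and no training.

```python
import numpy as np

# The "program": a fixed lookup table f on {0, 1, 2, 3}.
f = {0: 2, 1: 0, 2: 3, 3: 1}
n = 4

# "Compile" f into weights: route one-hot input x to one-hot output f(x).
W = np.zeros((n, n))
for x, y in f.items():
    W[y, x] = 1.0

def run_net(x):
    onehot = np.zeros(n)
    onehot[x] = 1.0
    return int(np.argmax(W @ onehot))
```

A real VM-in-weights construction is vastly more involved (state, control flow, attention doing the routing), but the principle is the same: the weights are written, not learned.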
>> So then, when you see this picture (and remember, it might only be showing half of the whole setup), do you think "wow, cool, they got to wrangle all of that", or do you think "OMG they had to wrangle all of that"? It's an important distinction to make, and I think someone's gut reaction to this amount of hardware in one place might influence how they approach building new systems.
It's a very good point. Like, I have some stuff that eats up RAM like a, a very hungry thing, so I went online to see if I could buy some old server blade with a couple TB of RAM from ebay. I found a few, refurbished, not in a horrible condition, not prohibitively expensive (I'm not currently funded, as such) and I remember this distinct feeling, like a frisson of excitement at the thought of having access to ~20 times more POWER than I usually have...
... and then I cooled down, didn't buy a server, and instead rented one with "only" 256 GB RAM until I could fix my stuff so that it now runs with up to 8GB on my laptop. Still expensive, but we're getting there.
Moral of the story: I don't know. I prefer to find ways to make software go faster rather than rely on hardware? I get the feeling I'm very alone in this, seeing as everyone's talking about putting nuclear-powered server farms in space and whatnot.
I'm sorry to say but that's ignorant bullshit. Freedom, not only of speech, but of expression and information, is enshrined in the EU Charter of Fundamental Rights, the major legal document of the EU; whence I quote:
Article 11 - Freedom of expression and information
1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.
2. The freedom and pluralism of the media shall be respected.
The link above clarifies that this article corresponds to article 10 of the European Convention on Human Rights (a legal document of the Council of Europe, a different body than the EU; the EU's members are a strict subset of the CoE's members). Article 10 delimits restrictions to the right of expression, as follows:
2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary."
And as the article points out, those restrictions apply to article 11 freedoms also, acting as an upper bound:
Pursuant to Article 52(3) of the Charter, the meaning and scope of this right are the same as those guaranteed by the ECHR. The limitations which may be imposed on it may therefore not exceed those provided for in Article 10(2) of the Convention, without prejudice to any restrictions which Community competition law may impose on Member States' right to introduce the licensing arrangements referred to in the third sentence of Article 10(1) of the ECHR.
In other words, yes, there is freedom of "speech", a.k.a expression, in the EU and it, and its limits, are enshrined in law.
I hate to make assumptions but there are a few public figures from the US that have argued that "Europe" has no freedom of speech like the brave US, like Paul Graham and Elon Musk, but they're talking out of their backsides.
Let's see how much those EU charter papers are worth when you get arrested for a tweet, which happens a lot.
Vague hate speech laws alone can utterly remove any of those "human rights". In Germany and the UK people get charged for jokes or for sarcastically criticizing politicians all the time. That is not free speech or even free expression, regardless of how much you want to cope to own Musk or whoever the news told you to hate this week.
People get arrested for tweets like calling for the homes of asylum seekers to be set on fire. Clear and direct calls for violence and criminal acts like this are usually what turns up when you research any of the cases brought up as examples of censorship in the EU or UK.
There is a difference between the EU and the UK: the UK is no longer a signatory to the EU Charter of Fundamental Rights. On the other hand, it is still a signatory to the European Convention on Human Rights, but of course there has been a sustained, concerted effort to exit it, disparage it as "a villains' charter" etc., at least since I've been in the country (2005).
The environment in the EU and the UK is also very different in terms of freedom of speech. In the UK, for example, I've kept tabs on people being put in jail for completely ludicrous reasons, like a kid who was accused of planning a Sandy Hook-like attack because he had a backpack with batteries, stones and ziplocks in it, and a young woman who was put in jail for writing poetry with themes interpreted as terrorist sympathy. Unfortunately my bookmarks are on my other computer and I can't access them now, and a search online only brings up more current cases that I don't know much about because I haven't looked into them yet, so I'll have to ask you to just believe me on that.
On the other hand you can find plenty of information about Drill music used as evidence in criminal trials, which is also a form of criminalisation of expression that should be banned under both the EU and CoE laws:
Then again there is the appalling treatment of climate activists and anti-war protesters in the UK, like for example the recent proscription of Palestine Action as a terrorist organisation and the arrest of hundreds of its supporters that followed. But I believe that protesting e.g. the carnage in Palestine is not treated much better in Germany or France.
The important thing to keep in mind, however, is that all of that is the result of authorities overstepping bounds and claiming for themselves powers that they don't have, which happens in freedom-of-speech-loving countries like the US even more often. The fact of the matter is that the law of the land protects freedom of expression, and we do not live in some dystopian dictatorship where you're bundled off if you so much as dare to make a squeak about the government, or the authorities.
Using a banned ad to claim otherwise is simply disingenuous.
It's too bad I can't block users who think offending their interlocutor is a form of debate on HN. I do not "cope" and I don't give a shit about Musk, he's the one who has faithful followers who parrot his ignorant bullshit on social media, which btw I do not follow. Don't sound like him and I won't wonder whether that's what you're doing.
I guess it depends on how you perceive "censorship". I wouldn't think of banning a misleading ad as censorship. My country, Greece, was under a military dictatorship for a few years in the 1960's and 70's, and censorship involved e.g. pre-approving all music, including not just song lyrics but also the music scores. Works by the two major Greek composers, Theodorakis and Hatzidakis [1] were banned outright and could not be played anywhere under pain of pain [2]. Obviously everything anyone wanted to publish in the press had to be pre-approved by state censors and any criticism of the regime, either written or simply spoken out loud, was punishable... you get the gist.
Not allowing advertisers to lie to advertise their product is I think not a kind of "censorship" one really needs to be worried about. They're free to advertise their product otherwise, they're just not free to lie to do it.
I feel silly making this elementary point, but freedoms can't ever be absolute in a society of more than one human. Even in the US, I bet you're free to drive, but you're not free to drive drunk. You're free to have sexual relations, but not with a minor. You're free to walk anywhere you like, but not on other people's property and not in the street with the cars (which btw is perfectly fine in Europe, and it's rules about jaywalking that are "pants on head" for us).
These are rules. Societies have rules. They should have them. There's no problem with that.
And now my 16-year old self is very disappointed that I've grown up to be a conservative, establishmentarian fossil.
___________
[1] Coincidence. We're not all called something-akis.