Easy. Inertia and incompetence. Government is full of paper pushers who have no higher wish than to live comfortably on taxpayers' money until they retire. The key to survival is to do what everyone else is doing, and never to be the first to try anything new.
The good thing is that as soon as someone tries something new and it looks like a success, the paper pushers will join in, once they think it is safe, and try to steal the fame and glory.
This is just how the government and the public sector works.
Yeah, anyone who says 'the government should be run like a company' has likely never worked in a large corporation. It's full of meaningless work, bullshit jobs and red tape.
Plus, fulfilling users' wishes takes priority over IT architecture management. Users have been brainwashed into demanding certain brands. Combine this with an IT management that lacks mid-term risk management or a vision, and you get happy users and an IT landscape easily taken hostage by single vendors.
The big problem, and I say this as someone who appreciates some of Microsoft's technologies, is that it is always first and foremost about Office, and nothing else.
Ah, but some of those are FOSS, they are; pity that most money and project steering flows from only one place.
Repeat the same listing exercise for every US big tech company and their influence on the computing industry at large, and possibly geopolitics; that is how we end up with HarmonyOS NEXT and ArkTS.
> Forgotten are Windows, XBox, DirectX, VC++, C#, F#, TypeScript, Github, VSCode, Azure, Teams, SQL Server, SharePoint, Dynamics,.... Ah but some of those are FOSS
Not exactly governments, but I work with NGOs in Germany, and plenty of them use Teams and other MS products, simply because they receive them for free and don't have the budget to pay someone to install open source alternatives. Training is especially costly, and in these environments people are not really "digital natives". It's not even about age, but about culture: people here will do what they are trained to do and fear doing something they don't know, because they might "do something wrong".
I was responsible for a platform that provides free online storage, chat, and video calls (BBB) for NGOs, and I had to hear these arguments over and over when discussing migrations.
So unless there is a political drive, together with good training and support, the transition is very, very difficult.
Many European governments are reassessing their tech dependencies, especially after incidents like that. It raises significant concerns about privacy and autonomy when companies respond to geopolitical pressures.
I don't think that's a great example. If Kahneman claimed not to be susceptible, it would have greatly undermined his claims about the universality of these phenomena: many other people would presumably also not be susceptible.
If I remember correctly, I took the interviewer's question to mean "now that you're aware of these cognitive biases, are you still affected by them?", not "do you experience cognitive biases?". I don't see how the first question is at odds with the universality claim. The latter would be.
I think you're misunderstanding the point this paper is trying to make. They're interested in trying to distinguish whether AI is capable of solving new math problems or only capable of identifying existing solutions in the literature. Distinguishing these two is difficult, because self-contained math problems that are easy enough for LLMs to address (e.g., minor Erdős problems) may have been solved already as subcomponents of other work, without this being widely known. So when an AI makes progress on such an Erdős problem, we don't know whether it had a new idea or correctly identified an existing but obscure answer. This issue has been dogging the claims of AI solving Erdős problems.
Instead, here you get questions that extremely famous mathematicians (Hairer, Spielman) are telling you (a) are solvable in <5 pages (b) do not have known solutions in the literature. This means that solutions from AI to these problems would perhaps give a clearer signal on what AI is doing, when it works on research math.
I find it unbelievable that they can't settle this question for themselves, without posting this, simply by asking the AI enough novel questions. I myself have little doubt that they can solve at least some novel questions (of course, similarity of proofs is a spectrum, so it's hard to draw the line at how original they are).
I settle this question for myself every month: I try asking ChatGPT and Gemini for help, but in my domains it fails miserably at anything that looks new. But, YMMV, that's just the experience of one professional mathematician.
You're wrong. The mistake could have been unfixable. That happens quite frequently (see: countless retracted claimed proofs of major results by professional mathematicians).
The thought police already arrived, see Columbia grant cancellations and Mahmoud Khalil [1].
[1] "Khalil is a “threat to the foreign policy and national security interests of the United States,” said the official, noting that this calculation was the driving force behind the arrest. “The allegation here is not that he was breaking the law,” said the official." https://www.thefp.com/p/the-ice-detention-of-a-columbia-stud...
It's nice to live in a world where actions have consequences. When the media coverage got too much, Marc Tessier-Lavigne finally had to resign as president of Stanford, so he could focus on his job as a Stanford professor.
I can't tell whether your post is a joke. Yes, Tessier-Lavigne was forced to resign. But Stanford let him stay on as a professor. That was terrible: they should have kicked him out of the university.
I'm no expert, but I suspect removing someone from a tenured professorship is a longer process than removing them as president. We don't know that it won't eventually happen.
There are betrayals so severe that a grindingly slow due process is itself an additional betrayal. I'm not arguing for a kangaroo court, but tenure should not be a defense for blatant cheating.
Interestingly, the asymptotically fastest known algorithm for minimum weight bipartite matching [A] uses an interior point method, which means it's also doing Riemannian optimization in some sense.
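For concreteness, the problem in question (minimum weight bipartite matching, also known as the assignment problem) can be stated and solved with a standard combinatorial solver; note this sketch uses SciPy's Hungarian-style solver, not the interior-point method the comment refers to, which is of interest for its asymptotic bound rather than as a practical routine:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost matrix for a small bipartite graph: cost[i, j] is the weight
# of matching left node i to right node j.
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

# Combinatorial solver: returns one column index per row such that
# the total matched weight is minimized.
rows, cols = linear_sum_assignment(cost)
total = int(cost[rows, cols].sum())
print(total)  # 5, via the matching 0->1, 1->0, 2->2
```

The interior-point approach instead solves the linear-programming relaxation of this problem, which is where the connection to Riemannian optimization comes in.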
>>>
Jonathan Friedman, Sy Syms director of PEN America’s U.S. Free Expression programs, said:
“The irony cannot be lost here: government officials have used their positions to muscle out a scholar of authoritarianism from a prestigious lecture,"
<<<
That doesn't really change the fact that it's exhausting (and worse, "commercially offputting") to be reminded that we're careening towards the worst futures literally imagined. I stayed away from Soylent and I'll probably stay away from this, but thanks for the heads-up. rimshot
As big PKD fans, that definitely flew over our heads a bit. We can def understand that view, and understand why it is commercially exhausting, especially because we agree that we are heading toward some of the worst futures possible; so did PKD. We definitely build with this in mind!
But the starting point of neural networks, in the ML/AI sense, is cybernetics plus Rosenblatt's perceptron, research done by mathematicians (who became early computer scientists).
That's why I wrote that it was unexpected. I'm not taking a position on whether this was deserved or undeserved, but this was clearly in the realm of physics and inspired by it.
Accepting wrong arguments in support of positions you hold is not a good way to live your life. It leads to constipation.