I just want to point out that there’s a huge difference between thoroughly investigating the family after abuse of this magnitude has been proven, and making parents legally culpable for any harm that comes to their children in general.
We can react to the fact that mothers can do more to protect their children from abuse in many ways. We can give them better access to information and support in getting away from abusers. We can create better links between police and communities they serve. We can create more pathways for children to be exposed to healthy adult behavior and connections with healthy adults, even when the family is dysfunctional.
But when we find evidence that existing supports have failed, deeply investigating why is critical.
The investigators will be able to determine how many rounds of abuse the victim suffered. The more often it happened, the less likely it is that the mother was unaware. And of course, the victim can tell us directly whether the mother knew. If she did, she deserves a decade of her life in prison as well.
Good design allows systems to work without anyone knowing how the whole thing works.
AI and humans are labor that can be put to work designing and vetting such systems. The problem with AI isn’t that it builds things we don’t understand. It’s that we do not have much experience with its failure modes, limitations and risks. There are many unknown unknowns.
It’s directly analogous to the problem of hiring, management, outsourcing and contacting. Sure, we know that labor can produce massive, highly reliable systems nobody fully understands. But how do we coordinate labor, AI and human, to successfully produce the systems we actually need? What failure modes and advantages does AI introduce into the mix for specific projects?
That’s where the uncertainty comes from, not the lack of comprehensive knowledge of the systems themselves.
Academia is a huge place, and no, its basic function is absolutely not to suck up to power. Every academic I know sees their function as the opposite, even if few take advantage of the limited protections afforded by tenure to speak truth to power.
But Tyler Cowen’s calling in particular is ABSOLUTELY to suck up to power.
Only in a monopoly situation. If you have several companies with comparable models you can easily switch between, all desperate for revenue to recoup their massive capex, you’re fine.
Will the modal developer of 2030 be much like a dev today?
Writing software was a craft. You learned to take a problem and turn it into precise, reliable rules in a special syntax.
If AI takes off, we'll see a new field emerging of AI-oriented architecture and project management. The skills will be different.
How do you deploy a massive compute budget effectively to steer software design when agents are writing the code and you're the only one responsible for the entire project because the company fired all the other engineers (or never hired them) to spend the money on AI instead?
Are there ways of factoring a software project that mitigate the problems of AI? For example, since AI has a hard time in high-context, novel situations but can crank out massive volumes of code almost for free, can you afford to spend more time factoring the project into low-context, heavily documented components that the AI can stitch together easily?
How do you get sufficient reliability in the critical components?
How do you manage a software project when no human understands the code base?
How do you insure and mitigate the risks of AI-designed products? Can insurance offset the risk and let you lower prices, even if AI-designed software is riskier? Can we quantify and put a dollar value on the risk of AI-designed software compared to human-designed?
What would be the most useful tools for making large AI-generated codebases inspectable?
When I think about these questions, a lot of them sound like things a manager or analyst might do. They don't sound like the "craft of code." Even if 1 developer in 2030 can do the work of 10 today, that doesn't mean the typical dev today is going to turn into that 10x engineer. It might just be a very different skillset.
Nitpick: blacksmiths typically did forging, which is hammering heated metal into shape, with benefits for the strength of the hammered material. CNC is machining, cutting things into the shape you want at room temperature.
Forging is machine-assisted now with tons of tools, but it's still somewhat of a craft; you can't just send a CAD file to a machine.
I think we're still figuring out where on that spectrum LLM coding will settle.
Blacksmiths also spent a lot of their time repairing things, whereas modern replacements primarily produce more things. Kind of an interesting shift. Economies and jobs change in so many ways.
It’s less prestigious because it doesn’t judge papers on novelty, only on technical accuracy. For incremental research like this, it is an appropriate choice. The lower prestige has no bearing on the accuracy of their findings.
A point I think is crucial to mention is that “effect size” is just standardized mean difference.
If a minority of patients benefit hugely and most get no benefit, then you get a modest effect size.
This is probably why this discussion always has a lot of people saying “yeah, it didn’t help me at all” and a few saying “it changed my life.”
I believe we should be focusing on statistical methods better suited to assessing this hypothesis formally. Using mean differences is GIGO if you're comparing a bimodal or highly skewed distribution to a bell curve.
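To make the point concrete, here's a minimal simulation sketch of the responder-subgroup scenario described above. The 20% responder fraction and the +2.5 SD benefit are illustrative numbers I made up, not figures from any study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Control group: no effect, standard normal outcomes.
control = rng.normal(0.0, 1.0, n)

# Treatment group: 80% non-responders (no benefit),
# 20% responders who get a large +2.5 SD benefit.
responder = rng.random(n) < 0.20
treatment = rng.normal(0.0, 1.0, n) + np.where(responder, 2.5, 0.0)

# Cohen's d: the standardized mean difference, using the pooled SD.
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # lands around 0.4: "modest"
```

Despite a life-changing effect for one patient in five, the averaged-out d comes in around 0.4, which most rules of thumb would call small-to-moderate. That's the averaging artifact in a nutshell.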
I will say that my LMHNP ordered a blood test for me during my ADHD evaluation. I live in the Pacific Northwest and had a serious vitamin D deficiency, apparently like everyone else who doesn’t supplement. He also got me started monitoring my blood pressure.
I can definitely tell that the Adderall I was prescribed had an immediate, huge benefit. Not sure about the vitamin D.
But I really appreciated that he took a wide angle look at my health.
https://www.cbc.ca/news/canada/trump-canada-yukon-1.3235254