CS academia tends to lag behind industry practice. The research frontier can be very cutting edge, but course curricula, assignments, and institutional norms are slower and more conservative. That's usually manageable when the shift is something like cloud adoption, new tooling, or a new dominant programming language. But this particular industry trend, the use of AI in software development, is massive and fast moving (especially the growth in agentic workflows over the last six months), and we're only now working out where it fits and what its limitations are.
Journal articles are sometimes years behind. There are still papers coming out that use GPT-3.5 (!) for their main result. These days I'm basically only reading arXiv preprints (and whatever is trending on GitHub).
Sure, you could argue it's like writing code that gets optimized by the compiler for whatever CPU architecture you're using. But the key difference between layers of abstraction and agentic development is the "fuzziness" of it. It's not deterministic. It's a lot more like managing a person.
The endgame is clear: mass surveillance combined with AI agents. It would be almost like having a personal government spy watching each individual.