"The" future of software engineering is a silly thing to predict. I might predict one substantial change is that we get our house a little more in order about universities and the private sector distinguishing between computer science, software engineering, and software development. Obviously they are not cleanly separated[1], but LLMs will affect each subfield very differently.
- The impact on computer science seems almost entirely negative so far: mostly the burden of academic wordslop, plus AI sucking all the air out of the room. What's worse is how little interesting computer science has come out of the biggest technological development in computing in many years: in fact there has been a terrible and very sudden regression in scientific methodology and integrity, with people rationalizing unscientific thinking and unprofessional behavior by pointing to economic success. I think it'll take decades to undo the damage; it's ideological.
- The impact on software development actually does seem a bit positive. I am not really a software developer at all. It always felt too frustrating :) However the easing of frustration might be offset by widespread devastation of new FOSS projects. I don't want to put my code online, even though I'm not monetizing it. I'm certainly not alone. That makes me really sad. But I watched ChatGPT copy-paste about 200 lines of F# straight from my own GitHub, without attribution. I'm not letting OpenAI steal my code again.
- Software engineering... it does not seem like any of these systems are actually capable of real software engineering, but we are also being adversely affected by an epidemic of unscientific thinking. Speaking of: I would like to see Mythos autonomously attempt a task as complex and serious as a C compiler. Opus 4.6 totally failed (even if popular coverage didn't portray it as such):
> The resulting compiler has nearly reached the limits of Opus's abilities. I tried (hard!) to fix several of the above limitations but wasn't fully successful. New features and bugfixes frequently broke existing functionality.
"Future of software engineering" folks should stuff like this in mind. What model is going to undo Mythos's mess? What if that mess is your company's product? Hope you know some very patient humans!
[1] They should have different educational tracks. There is no reason why a big fancy school like MIT can't have computer scientists do something like SICP and software engineers do the applied Python class. Forcing every computer professional into "computer science" is just silly; half the students gripe about how useless the theory is, the other half gripe about how grubby the practice is. What really sucks here is that I think Big Tech would support the idea; we're just stuck in a weird social rut.
I feel like LLMs[1] are going to cause a kind of "divorce" between those who love making software and those who love selling software. It was difficult for these two groups to communicate and coordinate before, and now it is _excruciating_. What little mutual tolerance and slack there was, is practically gone.
Open source was always[2] a fragile arrangement based on the kind of trust that involves looking at things through one's fingers (turning a blind eye may be more idiomatic in English), and we are at the point where you just have to either shut your eyes, or otherwise stop pretending that the situation can be salvaged at all.
Just a thought I had: some people think that LLM-shaming is déclassé, and maybe it is, but I think that perhaps we _should_ LLM-shame until the AI companies train their LLMs to actually give attribution, if nothing else. (I mean, if a model can memorize entire blocks of code, why can't it memorize where it saw that code? Would this not potentially _improve_ the attribution situation, to levels better than even the pre-LLM era? Oh right, because plagiarism might actually be the product.)
[1]: Not blaming the tech itself, but rather the people who choose to use it recklessly, and an industry that is based almost entirely on getting mega-corporations to buy startups that, against the odds, have acquired a decent number of happy-ish customers, that can now be relentlessly locked-in and up-sold to.
Toss them, because the level of damage they have done is astounding. Tons of companies are still fixing the losses from vibe coding.
What we need is better code analyzers, lexers, and the like. LLMs are practically the opposite, because by design they can never give a concise answer. Worse, they rot over time.
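To make the contrast concrete: a classical lexer is deterministic, so identical input always yields identical, concise output. A minimal sketch in Python (the token set and grammar here are invented for illustration, not from any particular tool):

```python
import re

# Token specification for a tiny expression language (illustrative only).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (kind, text) pairs; the same input always yields the same tokens."""
    for match in TOKEN_RE.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":  # drop whitespace
            yield (kind, match.group())

print(list(tokenize("x = 40 + 2")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```

Nothing here "rots": the behavior is pinned entirely by the regular expressions, which is exactly the kind of analyzability the comment above is asking for.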
> Tons of companies are still fixing the losses from vibe coding.
Well, you have to separate "future of" from "ensuing damage". This is similar to the fishing industry. Fishermen in the past used spears, rods, small nets, nowadays annual national catch statistics are reported in kilotonnes. They are destroying the ocean floor, causing massive extinction of species, causing irreversible damage. Yet, you can't argue looking 100-150 years back that industrial fishing was not "the future of the fishing industry". That is also why programmers won't ever disappear because of AI progress. Just like we still need fishermen, we'd need programmers. The sad truth about this is that soon we truly may have no need for fishermen, because there's no fish left in the ocean.
Hmm... it's hard to imagine that fishing with dynamite ever caused species extinction; the trawling industry definitely did. I don't think it's a fitting analogy, but I get what you're trying to say. I'm not arguing about the damage. The damage this human invention will cause is guaranteed, just like plastics. The answer to that is not "ban plastics completely" - kinda late for that, innit? The answer is "put resources into plastics research, make safe plastic possible". Maybe if we make safe, better AI, it will help with the plastic? If there's anything I've learned about humans, it's that first we cause a lot of damage.
Sometime around High Sierra, I changed my habit such that I don't upgrade to the next major release until August. By then, it's been patched a half dozen times or so. Yes, I'm basically a year behind all the time, but I don't need the new features.
Tahoe has made it so I will return to this upgrade strategy. I regret upgrading to Tahoe almost every day. If nothing else, the music apps on macOS and iOS cause me almost daily headaches.
Are you running Apple Silicon now? I'm still on an Intel machine with Sequoia. I doubt I'll upgrade it to Tahoe this fall; I'll stick with Sequoia until it stops receiving security updates. Then I'll have to upgrade hardware, I guess… hoping to have things sorted by then.
That was not intentional but just a coincidence actually. I came up with Gershwin as something to be comparable to "Darwin" as a core OS. I originally wanted to combine the Linux kernel with a Userland "familiar to switchers" more like a BSD and build on that. I also decided early on it was best to focus on being a DE that could run on anything and make the underlying OS not matter as much. Everyone involved really liked the name, so I went with it.
Screenshots look like OS X 1.0 and nothing like Rhapsody. I found the OS X aesthetics unpleasant compared to how Rhapsody looked, which was the final straw pushing me to Windows :)
The GTK theme engine from GNUstep can also be used to set a "Rhapsody" theme. It just allows using GTK themes. Here is an example of what that looks like https://github.com/pkgdemon/screenshots/blob/main/yellowbox-... I'd also like to make a native theme for that layout at some point.
I currently have the WindowManager.app I am fixing up, which draws native decorations with GSTheme onto X11 windows. The screenshot in the gershwin desktop repo shows the result with Chromium. I am also working on a Ladybird native GNUstep port, where I need to fix the toolbar, fix rendering issues, and get the codebase in shape for a proper PR. Then I want to start fixing up an existing SwiftUI bridge implementation. This would also be a welcome contribution if someone can offer to contribute before I eventually get to it. If that doesn't happen, I would like to create a native theme for this at some point.
Should be doable to put a Rhapsody theme on it... GNUstep is very flexible in this regard. Thanks to Method Swizzling, themes can change things pretty substantially.
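For anyone unfamiliar with the term: method swizzling exchanges two method implementations on a class at runtime, so a theme can intercept drawing calls everywhere without subclassing. A conceptual analog sketched in Python via attribute swapping (the class and method names are made up for illustration; this is not GNUstep's actual GSTheme API):

```python
# Conceptual analog of Objective-C method swizzling: exchange two method
# implementations on a class at runtime. Every existing call site of the
# original method then picks up the replacement behavior.
class Widget:
    def draw(self):
        return "default chrome"

    def draw_rhapsody(self):
        return "Rhapsody-style chrome"

def swizzle(cls, original, replacement):
    """Swap two methods on cls, keeping the original reachable under the other name."""
    orig_impl = getattr(cls, original)
    repl_impl = getattr(cls, replacement)
    setattr(cls, original, repl_impl)
    setattr(cls, replacement, orig_impl)

w = Widget()
assert w.draw() == "default chrome"
swizzle(Widget, "draw", "draw_rhapsody")
assert w.draw() == "Rhapsody-style chrome"    # every Widget is now themed
assert w.draw_rhapsody() == "default chrome"  # original still reachable
```

In real Objective-C this is done through the runtime (e.g. exchanging method implementations), which is why GNUstep themes can change things as substantially as the comment above describes.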
And yet again customer demand and financial gain supercede environmental concerns. There’s no hope for a better, less consumer-oriented culture if even the indie creatives among us acknowledge the problem yet succumb to it.
Less consumer-oriented culture demands brainwashing, totalitarianism and terror, to force people to not do things they naturally want to, when there is a capability for doing that (if there's no capability, a nation will be physically overwhelmed by other nations and cease to exist/replaced)...
Well... if we had a constant stream of inventions, so that people always had things they'd love to have but struggle to afford, then there wouldn't be a need to induce demand. But we don't have nearly enough: people accumulate spare cash faster than inventions they want are produced each year.
If we don't induce demand by brainwashing, what will people do? They will keep inflating bubbles buying up stocks (making the economy even more unstable, and eventually undermining themselves), houses (making sure new generations can't buy theirs, depressing birth rates and giving rise to political radicalism), and crypto (which is absolute insanity). People need to be given ways to spend their spare cash, and nudged to do it as opposed to "investing" that cash (which is, in the true meaning of the word, mostly impossible because there aren't enough inventions to invest into).
Stanley tumblers, no, but even magpies like collecting rocks and buttons and things. Seeing a checkmark on an online digital widget just really doesn't scratch the same itch.
Nearly nobody cares about the load on “national and local government resources, local utility capacity, and roadway infrastructure” for any other day-to-day activity. Why should they care about the same for AI, which for most people is “out there online” somewhere? Relatedly, crypto bros worried about electricity usage only insofar as it was expensive and whether they could move closer to hydro dams.
The parent comment's point is that we _should_ care because cheap frontier-model access (that many of us have quickly become hopelessly dependent on) might be temporary.
It's amazing that anyone who has seen anything in technology in the last 30 years can say, "better be careful, they might stop subsidizing this and then it's gonna get expensive!" I can buy a 1 TB flash drive for $100. Even those with every reason to amortize the hardware over the longest horizon possible are only going out 6 years. "640K should be enough for anyone," right?
Yeah, I can't wait to buy some RAM for my PC! Oh, wait, the AI companies are buying up all the RAM sticks on the planet and driving their prices to comical highs. Surely these beacons of ethics and morality won't do the same with their services that are actively hemorrhaging billions of dollars; they're providing those services to us out of the goodness of their black hearts and not any kind of monetary incentive, after all!
They should care because they are expensive. If we become dependent on something that is expensive, we have to maintain a certain level of economic productivity to sustain our dependence.
For AI, once these companies or shareholders start demanding profit, then users will be footing the bill. At this rate, it seems like it'll be expensive without some technological breakthrough as another user mentioned.
For other things, like roads and public utilities, we have to maintain a certain level of economic productivity to sustain those as well. Roads for example are expensive to maintain. Municipalities, states, and the federal government within the US are in lots of debt associated with roads specifically. This debt may not be a problem now, but it leaves us vulnerable to problems in the future.
That's an accurate and sad truth about humanity in general, isn't it? We all feel safer and saner if we avoid thinking about how things really are. It's doubly true if our hands are dirty to some extent.
At the same time, I submit that ignoring the effectiveness of very small contingents of highly motivated people is a common failure mode of humanity in general. Recall that "nearly nobody" also describes "people who are the President of the United States." Observe how that tiny rounding error of humanity is responsible for quite a bit of the way the world goes - for good or ill. Arguably, that level of effectiveness doesn't even require much intelligence.
> Why should they care about the same for AI which for most people is “out there online” somewhere?
Well, some will be smart enough to see the problem. Some portion thereof will be wise enough to see a solution. And a portion of those folks will be motivated enough to implement it. That's all that's required. Very simple even if it's not very easy or likely.
I don't think there is one programming language that is best suited for all types of programs. I think that Rust is probably the best language currently in use for specifically implementing Unix coreutils, but I don't think that this implies that (say) Zig or Odin or Go or Haskell would necessarily be terrible choices (although I really would pick Rust rather than any of those).
But my point was that there's no reason to think that the specific package of design decisions that Rust made as a language is the best possible one; and there's no reason why people shouldn't continue to create new programming languages including ones intended to be good at writing basic PC OS utils, and it's certainly possible that one such language might turn out to do enough things better than Rust does that a rewrite is justified.