Seldom a good idea, and faulty in so many ways. E.g.:
It's long been a tenet at the Schindler bitranch that when you find a block of code with several bugs, it's time to dump the whole file and write it again from scratch. It's faster than trying to squash all the bugs in your bug factory.
So now you've traded bugs you know about for bugs you know nothing about.
The real key isn't rewriting, it's refactoring. And in my opinion, it should be done almost continually.
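To make the refactor-over-rewrite point concrete, here's a minimal sketch (the function, names, and numbers are all hypothetical, not from the thread): pin the current behaviour with a characterization test first, then restructure in small steps while the tests keep passing, instead of throwing the module away.

```python
import math

# Hypothetical buggy-ish original: duplicated logic, magic numbers.
def discount_price(price, qty):
    if qty >= 10:
        return price * qty - price * qty * 0.1
    else:
        return price * qty

# Step 1: pin the current behaviour with characterization tests
# before touching anything.
assert discount_price(100, 10) == 900.0
assert discount_price(100, 5) == 500

# Step 2: a small refactor that names the intent (a bulk-discount
# rate) in one place, rather than a ground-up rewrite.
BULK_THRESHOLD = 10
BULK_DISCOUNT = 0.10

def discount_price_v2(price, qty):
    subtotal = price * qty
    rate = BULK_DISCOUNT if qty >= BULK_THRESHOLD else 0.0
    return subtotal * (1 - rate)

# Step 3: the refactored version must agree with the pinned
# behaviour (isclose guards against float rounding differences).
for p, q in [(100, 10), (100, 5), (19.99, 12), (0, 3)]:
    assert math.isclose(discount_price_v2(p, q), discount_price(p, q))
```

The point of step 1 is that you keep the bugs you know about in view the whole time, which is exactly what a from-scratch rewrite throws away.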
Actually, the idea that most bugs are produced by faulty modules, and that such faulty modules don't produce fewer bugs as a result of efforts to fix them (and so are best thrown out), is one of the few interesting results produced by experimental studies of software development. So I don't think it should be dismissed so cavalierly.
I don't have time to track down the original sources on this, but I do remember that one of the places I read about it is in Robert Glass' Facts and Fallacies of Software Engineering, which is worth reading because it's (a) well-grounded in the research literature, yet (b) not stupendously boring.
It's worth noting that the findings in question have to do with rewriting faulty modules, not with rewriting entire systems (which is a quite different matter). But this is also what the OP was talking about, so I think this evidence, for what it's worth, supports her claim.
Agreed - the distinction is between (1) the code is simply buggy and (2) the code is full of bugs because the design is wrong/inadequate.
I've realised over time that being able to fix a bug within a decent design is a lot of the value (of software). A single line of bugfix can be worth reams of new code. It's why banks are still running COBOL mainframes.
The two big variables are the skill of the architect, and their understanding of the real problem.
Rewriting is valid if the new architect is more skilled and understands the problem better. Unfortunately it's quite likely that the new architect doesn't understand the problem better, in which case the new architecture will probably be worse than the first.
To complicate matters, the problem may change over time as well. So if you sink the time into a rewrite, there's no guarantee the new version solves the latest version of the problem any better than the first.
Given the uncertainties, I have to see a real solid reason and clear benefits before doing a rewrite. Just because there are some code smells or a lot of bugs doesn't necessarily mean I can come up with a better architecture. I'd rather refactor a bit at a time and see what shakes out.
> The two big variables are the skill of the architect, and their understanding of the real problem.
I read the article as discussing when to throw your own code away and start over. Under those circumstances, aren't you more skilled today than you were when you wrote the original? Is your understanding of the problem better or worse?
For that matter, is it even the same problem, or has the problem itself changed since you first wrote the code?