Language design has to balance expressiveness with maintainability, and there's no question that some additions lean far more toward the former than the latter.
Operator overloading, as mentioned, is something that seems fantastic when you're banging out a bunch of code. When you return to that code a month later, however, with no context, it leads to mystery code whose behavior you can't predict without tracing back through every constituent type. We constantly see people make the (unsupported) claim that scientific coding simply needs operator overloading, and while I can't speak specifically to that industry, in the financial industry operator overloading is how you end up with terrible, mystery-meat code.
I think a language should either have operators, or it should not.
There's a decent case for not having operators at all; Lisp is a fine example. Everything is a function, end of story.
But once you have operators, what's the rationale for restricting them to built-in types? This, to me, fundamentally makes no more sense than, say, banning user-created types altogether.
Clearly nothing needs operator overloading, but if you have operators at all, why should "a + b" be valid if and only if a and b are certain language primitives? If you think + should only be used for numeric addition, I'd totally agree, but numeric types don't have to be restricted to built-in ones.
I can certainly understand how that sentiment arose back in the dark ages of early operator-overloading abuse.
I'm starting to wonder, however, whether the horror stories are continuing to propagate long after anyone has seen a real, living monster.
If I'm browsing someone else's code today in a modern development environment and I encounter a function I don't know, it's generally pretty easy to navigate to the definition (if it's in the codebase) or documentation (if it's not).
If operator overloading is just a function with a name that happens to be a string of symbols and with infix application at the call site, can't I find out about it just as easily?
What's the difference between
myMysteryFunction(a,b)
and
a + b
…if I know that the type of a and b isn't something ordinary like an int or a float?
Well, for me, the latter would masquerade as a trivial expression seen thousands of times, and which now may or may not have unexpected behavior. Every instance of something like a + b must now carry the slight extra cognitive load of potentially having a sort of "optional type annotation" in one's mind. Usually normal, but might not be.
The former is at least potentially self-documenting. And even if it's a badly named function, at least you know it's a special function, and you know you'll have to go look up its behavior.
Done right, it should masquerade as a trivial expression seen thousands of times. There's a real advantage in being able to have user-defined types that have the same interface as native types.
I want the ability to do 'a == b' regardless of whether it's a built-in or user-defined type. That's abstraction.
That's a good point, and I agree. The situation I'm thinking of is something like, say, you have an object or type called a "Tire". It has width, diameter, weight, price, tread depth, compound, etc. As you work with Tires in your program, you frequently end up having to compare their widths. So you overload the ">" operator to return true if one Tire has greater width than another. Your code ends up festooned with these comparisons.
Flash forward one year, you're long gone, and new developers on your team are left wondering which aspect of a Tire is used in comparisons with ">". Everything works, but it's easy to see how the ambiguity could lead to subtle submarine bugs; incorrect developer assumptions about the behavior of ">" may produce mostly -- coincidentally -- working code.
It's true that I can't provide evidence that scientific computing NEEDS operator overloading beyond my own anecdotal experience. However, I've also never seen any support for the claim that operator overloading muddies code, besides other people's anecdotal experience.
The simple example I always come back to is how hard it is to spot a one-character typo hiding behind an innocent-looking "a + b".
I'll also concede that I've seen some true abominations with operator overloading. Would people be willing to compromise with just allowing the overloading of addition, subtraction, multiplication, and division?
I know that overloading << or || can lead to some confusing code, but offering the basic arithmetic operators would end 90% of the whining by the people who do scientific code and need this functionality.
Do you prefer the C library interface for working on strings to the way that Java handles them, then?
After all, Java allows statements like "string3 = string1 + string2;", which is much more ambiguous than C's "strcpy(string3, string1); strcat(string3, string2);" (or a modern equivalent where those functions are namespaced in a string class).
I work on 3D games. Sometimes I work in languages (C++, C#, shader languages) that allow me to use operator overloading (and thus infix notation) for 3D vectors and matrices. And sometimes I work in languages (ActionScript, Java, JavaScript) that don't allow me to use operator overloading and infix notation for vector and matrix math.
The code in the latter set of languages is much, much, much less readable by nearly any reasonable measure.
Let me expand on what I think is the point of the GP:
I hate operator overloading myself, but at the same time I'm also certain there are use cases where it is a big win. (I assume it is in Swift because it could be motivated well.)
You'll have to specify coding standards anyway -- and make certain that they are followed. Just add a paragraph about having to get a senior developer's signature on any use of operator overloading, on pain of needing to update the CV.